Everyone always says XMPP, and there were a lot of recommendations for ejabberd. I tried this recently and it was a total disaster; I do not have a working chat server. If I followed the Docker instructions, the server would just crash with no details of what went wrong. Where it should have been creating a default server config file, it was instead creating a directory with the wrong permissions and then promptly crashing. I tried following their documentation, but after about 6 hours of messing about and adding more and more I still couldn't get a client to log in to it. I have no idea how to make this work.
So whatever the solution ultimately is, I can't recommend ejabberd.


Most of the public has been voting for this the entire time; giving the rich all the money seems to be an immensely popular policy. They are under enormous amounts of daily propaganda, but it's been obvious the entire time.


A crime against humanity and a breach of his human right to life is not adequately met by an apology; it's more adequately dealt with by a trial, under human rights legislation, for those responsible for the death. We need to stop letting those in power off for their crimes.


Most technology adoption follows an S-curve; it can often take a long time to get going. Linux has been gradually and steadily improving, especially for games and other desktop uses, while at the same time Microsoft has been making Windows worse. I feel this is more Microsoft's fault: they have abandoned the development of desktop Windows and the advancement of support for modern processor designs and gaming hardware. This has, for the first time, let Linux catch up and in many cases exceed Windows' capabilities, especially in gaming, which has always been a stubborn issue. There are still problems, especially in hardware support for VR and other peripherals, but that's the sort of thing that might sort itself out once the user base grows and companies start producing software for Linux instead.
It might not be enough, but the switch-off of Windows 10 is causing a change which Microsoft might really regret in a few years.


Initially a lot of AI was trained on lower-class GPUs, and none of these AI-specific cards/blades existed. The problem is that the models are quite large and hence require a lot of VRAM to work on, or you split them up and pay enormous latency penalties going across the network. Putting it all into one giant package costs a lot more, but it also performs a lot better, because AI is not an embarrassingly parallel problem that can be easily split across many GPUs without penalty. So the goal is often to reduce the number of GPUs you need to get a result quickly enough, and that brings its own set of problems of power density in server racks.
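The VRAM-versus-GPU-count trade-off above can be sketched with back-of-envelope arithmetic. All the numbers here (model size, precision, the ~25% overhead for cache/activations, the per-card memory) are illustrative assumptions, not vendor specs:

```python
import math

# Rough VRAM sizing for a large model, to show why one big-memory
# package beats splitting across many cards. Numbers are assumptions.

def vram_gb(params_billions: float, bytes_per_param: int = 2,
            overhead: float = 1.25) -> float:
    """Weights at the given precision plus ~25% for KV cache and
    activations -- a loose rule of thumb, not a real profiler."""
    return params_billions * bytes_per_param * overhead

need = vram_gb(70)                  # a 70B-parameter model at fp16
per_gpu = 24                        # a typical consumer-class card, in GB
gpus = math.ceil(need / per_gpu)    # cards needed if you shard it
print(f"~{need:.0f} GB needed -> at least {gpus} x {per_gpu} GB GPUs")
```

Every boundary between those cards is an interconnect hop that a single large package avoids, which is the performance argument for the expensive big-memory parts.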


Maybe one day fusion will finally deliver and we might have cheap and clean energy with no consequences for the environment other than a few big reactors in a country. But until that day arrives and we work it out, we have to transition, and wind, solar and batteries are winning because they are cheaper than gas, coal and nuclear.


That gets in the way of corporations making money out of “normal market movements”. It's just supply and demand when companies do it; the moment the working class tries to do the same, they find themselves getting punished. If it's illegal, it's going to need to be illegal for companies too, and that means governments have to start doing their job again and reining in corporate criminality.


It's low power that is still making small Arm computers popular. It's impossible to get a PC down into the 2-5 watt power consumption range, and over time it's the electrical costs that add up. I would suggest the RPi 5 is not the thing to get, because it's expensive for what it is and more performance is available from other options supported by Armbian.
I use a 5600G on a B450 ITX board with 4x 8TB Seagate drives and see about 35W idle and about 40W average. It used to be 45W because I was forced to use a GPU in addition to a 3600 to boot (even though it's headless; just a bad BIOS setup that I can't fix), and getting a CPU with integrated graphics dropped my idle consumption quite a bit. I suspect the extra wattage for your machine is probably the bigger motherboard and the less efficient CPU.
It is possible to get the machine part down into single-digit wattage, and then about 5W a drive is the floor without spinning them down, so the minimum you could likely see with a much less powerful CPU is about 30-35W.
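The "electrical costs add up" point is easy to put numbers on. A quick sketch converting a constant idle draw into yearly energy and cost; the price per kWh is an assumed example figure, swap in your local tariff:

```python
# Convert a constant idle draw into yearly kWh and cost, to show why
# chasing a few watts matters on an always-on machine.

HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float, price_per_kwh: float = 0.30) -> float:
    kwh = watts * HOURS_PER_YEAR / 1000   # W running all year -> kWh
    return kwh * price_per_kwh

for w in (5, 35, 60):
    print(f"{w:>2} W idle -> {w * HOURS_PER_YEAR / 1000:.0f} kWh/yr, "
          f"~{yearly_cost(w):.2f} per year")
```

At that assumed tariff, the gap between a 5W Arm board and a 35W PC works out to roughly 260 kWh a year, which is where the small boards win back their price.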


Make sure none of the exceptions are ticked and the “Minimum number of articles to keep per feed” is also 25 or below. Then it's up to the cron job when that runs, so you might have to manually purge it and optimise the database to see what it will actually keep.
I can't say I have ever worried about it; I've been running FreshRSS for years and it seems to keep its database size in check fairly well. The defaults have worked fine for me and it rarely gets above 100MB. So I know it “loosely” works, in that old articles are absolutely getting purged in time, but I have no idea how strictly it follows these rules.
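For anyone wondering what "purge then optimise" actually does to the file on disk, here is a toy demonstration on a throwaway SQLite database (FreshRSS's default backend). The schema and path are illustrative examples, not FreshRSS's real schema:

```python
import os
import sqlite3
import tempfile

# Deleting rows does not shrink an SQLite file; VACUUM rewrites it.
path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, body BLOB)")
conn.executemany("INSERT INTO entry (body) VALUES (?)",
                 [(b"x" * 512,) for _ in range(5000)])
before = os.path.getsize(path)

conn.execute("DELETE FROM entry")  # the "purge": rows gone, file unchanged
conn.execute("VACUUM")             # the "optimise": rewrites and shrinks it
after = os.path.getsize(path)
conn.close()
print(f"{before} bytes -> {after} bytes after purge + VACUUM")
```

That is why the database can look oversized right after a purge: the space is only reclaimed once the optimise step runs.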


Everyone has given Linux answers; it's also worth knowing quite a lot of UEFIs contain the ability to secure erase as well. There are also a number of USB-bootable disk management tools that can do a secure erase.
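For completeness, the usual Linux CLI route looks roughly like this. Device names and the temporary password are placeholders, and these commands irreversibly destroy all data on the drive, so treat this as a hedged sketch and check your drive's state first:

```shell
# ATA secure erase with hdparm (SATA drives). /dev/sdX is a placeholder.
hdparm -I /dev/sdX   # confirm "supported" and "not frozen" under Security

# Set a temporary drive password, then issue the erase.
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX

# NVMe drives use nvme-cli's format command with secure-erase setting 1.
nvme format /dev/nvme0n1 --ses=1
```

If `hdparm -I` reports the drive as "frozen", a suspend/resume cycle or the UEFI route mentioned above is usually the way around it.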


No, most people in the developed nations earn less than this. It's heavily biased towards Americans and high earners; the typical just-above-minimum-wage earner isn't in this group.


Set the DMZ on the ISP's router to forward to the second router; then everything that hits your outside IP will be forwarded to router 2. Then on router 2 you open the ports for your service and forward them to the internal machine. That should all work fine.
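As a sketch of the second hop, assuming router 2 runs Linux with iptables, its WAN interface is eth0, and the internal server is 192.168.2.10 serving HTTPS (all example values, not anything from the original setup):

```shell
# Router 1 (ISP box): set its DMZ host to router 2's WAN address
# in the ISP router's web UI -- no rules needed on that side.

# Router 2: DNAT the service port to the internal machine...
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
  -j DNAT --to-destination 192.168.2.10:443

# ...and allow the forwarded traffic through the filter table.
iptables -A FORWARD -p tcp -d 192.168.2.10 --dport 443 -j ACCEPT
```

Consumer routers expose the same DNAT step as "port forwarding" in their UI, so the rules above are only needed if router 2 is something hand-rolled.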


It's quite complicated to set up as well; I just went through the instructions and it's a long way from "just add to Docker and run", unfortunately. It would be nice to be able to just get a runner in the same or a different Docker container and have it work easily, without a lot of manual setup in Linux of directories, users, pipes, etc.


I did the same move from Contabo to Netcup. With Contabo I had all sorts of weird bandwidth-limiting problems that I couldn't explain and which they continued to deny were throttling. Netcup worked perfectly.


One of the issues here is that a paper being Chinese, in any field, is still the factor most correlated with the paper being faked in some way. So while China is producing a lot of science, and good science, it's also the world leader in garbage fake science, and it needs to get a handle on the problem.


The problem is the information asymmetry: there is always another person for a fraudulent company to exploit, thanks to a dysfunctionally expensive court system. It's why we need market-level regulations and public institutions that recover people's money and fine the organisations for their breaches. This sort of thing works a lot better in the EU than in the US due to the sales laws: the ability to return within 2 weeks, a default warranty on goods out to 12 months, and the expectation of goods being as advertised, all forced onto the retailers. They work; they need more enforcement from regulatory bodies, but retailers do follow them for the most part, and they quickly change their tune when you go to take legal action when they don't, because courts know these laws inside and out.


When fake news as a concept appeared a bit over a decade ago, it was all about the traditional media and the lies and narratives they formed in their articles. That same media tried to spin it as being about satire sites like newstrump, and most recently their entire spin has been about social media being the cause. I think social media has caught more of the blame because it's clear to see that some users are spreading a lot of misinformation and you can see others falling into the trap, but really what legitimises it all is what the media does and does not platform.


On the one hand they were talking self-hosting, and then they pull out multiple rack servers costing tens of thousands of dollars. People don't need a data centre at home to sync some files, pictures and email and play some media!