• 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: June 24th, 2023



  • I would love if things weren’t as bad as they looked, but…

    Most of the destruction of buildings in Gaza is of empty buildings with no inhabitants. The IDF blows up or bulldozes buildings when they find booby traps in them, have tunnel entrances, provide military advantage, were used for weapons storage or command, were used as sniper or RPG nests, block lines of sight, to clear security corridors, space for military camps and operations, and so on. The list of reasons is long and liberally applied by the bulldozer operators and sappers on the ground.

    (emphasis mine) While destroying military targets is fair, pretty much every building blocks lines of sight, including civilian housing, shops, hospitals, and so on. Applied liberally, this criterion essentially amounts to destroying all buildings. Having your house (and nearby facilities like shops, schools, and hospitals) bulldozed has a severe negative impact on your ability to live, even if you don’t die in the destruction itself.

    The IDF warns before major operations and then almost all civilians leave the area. The evacuation of Rafah is a good example for this. There are also targeted attacks, usually by air, in non evacuated areas, but these are only responsible for a small fraction of the destruction.

    (emphasis mine) While the IDF does do this, and it avoids immediate death for many, it still deprives people of the human right to housing. Furthermore, a warning does not provide those who evacuate or flee with housing, food, and water - all of which are currently in significant shortage - and acting on the warning severely limits one’s ability to provide for oneself: one can only carry so much. Disregard for innocent human lives isn’t just civilian deaths, it is also the deprivation of the resources people need to live.


  • It says ‘a neighborhood’, not ‘one neighborhood’. Furthermore, the article specifically mentions that it is representative of other neighborhoods in Gaza.

    A neighborhood provides an example of the disregard for innocent human lives behind the Israeli attacks, with visual proof provided by satellite imagery, even if it is one of many.

    Stating ‘one neighborhood’ would imply it is the only one. While the NY Times does not have the best track record, that would be needlessly reductive for an article showing what is happening in Gaza. Especially as a picture of a single neighborhood can actually be more impactful than one of the whole: close enough that you can see the individual places where people live, far enough to see the extent of the destruction.


  • Also, ImageTragick was a thing; there are definitely security implications to adding dependencies to implement a feature this way (especially on a shared instance). The API at the very least needs to handle auth, so that your images and videos don’t get rotated by others.

    Then there is UX: you may want to show the user that things have rotated (otherwise the button will be deemed non-functional, even if it uses this one-liner behind the scenes), but you probably don’t want to transfer the entire video multiple times to show this (too slow, costs data).

    Yeah, it is one thing to add a one-liner, but another to make a well-implemented feature.
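    To make that concrete, here is a minimal sketch, assuming a Python backend and assuming the “one-liner” is something like ImageMagick’s `mogrify -rotate 90` (the storage layout, function name, and allowed angles are all made up for illustration):

    ```python
    # Hypothetical sketch: the rotation itself is one line, the rest is what a
    # real endpoint ends up needing. Names and paths are assumptions.
    import subprocess
    from pathlib import Path

    UPLOAD_ROOT = Path("/srv/media")  # assumed per-user storage layout

    def rotate_media(user_id: str, media_id: str, degrees: int) -> None:
        # Auth: only the owner may rotate their own upload.
        path = UPLOAD_ROOT / user_id / media_id
        if not path.is_file():
            raise PermissionError("not your file, or it does not exist")

        # Input validation: never pass arbitrary values to the external tool.
        if degrees not in (90, 180, 270):
            raise ValueError("unsupported rotation")

        # The actual "one-liner": shell out to ImageMagick. The ImageTragick-era
        # lesson is that this dependency itself now has to be kept patched.
        subprocess.run(
            ["mogrify", "-rotate", str(degrees), str(path)],
            check=True,
            timeout=30,
        )

        # Still missing: regenerating thumbnails/previews so the UI reflects the
        # change without re-downloading the full file, video support, error
        # reporting to the user, and so on.
    ```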


  • It is complicated. It is not technically always the case, but in practice it may very well be. As this page (in Dutch) notes, unless the driver can show that ‘overmacht’ (force majeure) applies - meaning no action they could have taken would have avoided or reduced the bodily harm - they are at least partly responsible for damages. For example, not engaging the brakes as soon as it is clear that you will hit the cyclist would still leave you (partially) liable for the costs, even if the cyclist made an error themselves (such as crossing a red light).

    Because the burden of proof is on the driver, this can be hard to prove, so their insurance may have to pay up even if the driver did not do anything wrong.




  • Yes, true, but that is assuming:

    1. Any potential future improvement solely comes from ingesting more useful data.
    2. That the amount of data produced is not ever-increasing (even excluding AI slop).
    3. No (new) techniques that make it more efficient in terms of the data required to train are published or engineered.
    4. No (new) techniques that improve reliability are used, e.g. specializing it for code auditing.

    What the author of the blogpost has shown is that it can find useful issues even now. If you apply it to a codebase, have a human label each reported issue as real or false, and train the model to make real issues more likely and false positives less likely, it could still be improved specifically for this application. That does not require nearly as much data as general improvements do.
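    A minimal sketch of what that human-in-the-loop step could look like (the file layout, report fields, and the downstream fine-tuning step are all assumptions, not anything from the blogpost):

    ```python
    # Hypothetical sketch: turn human triage of model-generated audit reports
    # into training data for an auditing-specific fine-tune.
    import json

    def build_training_set(triaged_reports, out_path="audit_labels.jsonl"):
        """triaged_reports: iterable of dicts like
        {"code": "...", "report": "...", "verdict": "real" | "false_positive"}
        produced by a human reviewing each generated finding."""
        with open(out_path, "w") as f:
            for item in triaged_reports:
                f.write(json.dumps({
                    "prompt": item["code"],              # the audited code snippet
                    "completion": item["report"],        # the model's finding
                    "label": item["verdict"] == "real",  # reward signal
                }) + "\n")

    # The resulting file could then feed a reward model or a rejection-sampling
    # fine-tune, so the model learns to prefer findings humans confirmed as real.
    ```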

    While I agree that improvements are not a given, I wouldn’t assume that they can never happen again. Despite these companies having effectively exhausted all of the text on the internet, improvements are currently still being made left, right, and center. If the many billions they are spending improve these models such that we get a fancy new tool for making our software safer and more secure: great! If it ends up being an endless money pit and nothing ever comes of it, oh well. I’ll just wait and see which of the two it will be.


  • Not quite, though. In the blogpost the pentester notes that it found a similar issue (one he had overlooked) occurring elsewhere, in the logoff handler, which he spotted and verified while sifting through a number of the reports it generated. Additionally, the pentester noted that the fix it supplied accounted for (and documented) an issue that his own suggested fix was (still) susceptible to. This shows that it could be(come) a new tool that lets us identify issues that are not found by techniques like fuzzing and can even be overlooked by a pentester actively searching for them, never mind a kernel programmer.

    Now, these models generate a ton of false positives, which make the signal-to-noise ratio still much higher than what would be preferred. But the fact that a language model can locate and identify these issues at all, even if sporadically, is already orders of magnitude more than what I would have expected initially. I would have expected it to only hallucinate issues, not finding anything that is remotely like an actual security issue. Much like the spam the curl project is experiencing.