• 1 Post
  • 30 Comments
Joined 1 year ago
Cake day: July 16th, 2023

  • All 9k stars, 10k PRs, 400 forks & professional website are fake?

    Technically, it is entirely possible to find a real existing project, make a carbon copy of its website (there are automated tools for this), have a massive amount of bots give it 9K stars and open a lot of PRs, issues and forks (bonus points if these are also copies of actual existing issues/PRs), and generate a fake commit history, which git makes entirely possible since commit timestamps are just metadata the committer sets (see the sketch below); a bunch of releases could be quickly generated too. Though you would probably be able to notice pretty quickly that timestamps don’t match, since I don’t think GitHub features like issues can have fake timestamps (unlike git).
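
    A minimal sketch of the timestamp part, assuming only that git is installed (the repo path, dates and messages here are made up): git takes commit dates from the GIT_AUTHOR_DATE and GIT_COMMITTER_DATE environment variables, so a few lines of python can backdate an entire history:

    ```python
    # Sketch: backdating a git history (hypothetical repo path).
    # git reads commit timestamps from env vars, so "history" is
    # whatever the committer says it is.
    import os
    import subprocess
    from datetime import datetime, timedelta

    repo = "/tmp/fake-project"  # hypothetical
    os.makedirs(repo, exist_ok=True)
    subprocess.run(["git", "init"], cwd=repo, check=True)

    start = datetime(2019, 3, 1, 12, 0, 0)
    for i in range(3):
        with open(os.path.join(repo, "notes.txt"), "a") as f:
            f.write(f"change {i}\n")
        fake = (start + timedelta(days=30 * i)).isoformat()
        env = dict(os.environ,
                   GIT_AUTHOR_DATE=fake,     # shown by `git log`
                   GIT_COMMITTER_DATE=fake)  # used by contribution graphs
        subprocess.run(["git", "add", "-A"], cwd=repo, check=True)
        subprocess.run(["git", "-c", "user.name=Bot",
                        "-c", "user.email=bot@example.com",
                        "commit", "-m", f"change {i}"],
                       cwd=repo, env=env, check=True)
    ```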

    Though I don’t think this has ever actually been done, there are services that claim to sell not only stars but issues, pull requests and forks too. Assuming the service is not just a scam in itself, any cursory look at the contents of the issues etc. would probably give away that they are AI generated



  • looks like work on the Android client started in 2011 (or at least, that’s when it seemingly came under version control)

    the app was released in 2014

    so it has likely inherited decisions from ~14 years ago; I’d guess there was a several-year gap where having a native desktop app was not even a concern

    Also, the smartphone landscape was totally different back then: Qt’s Android support was in alpha (or totally nonexistent, if the Signal project is a bit older than the GitHub repository makes it seem), and the average smartphone had extremely weak processing power and a tiny screen resolution by today’s standards. Making the same GUI work on both desktop and mobile was probably a pretty ridiculous proposition.



  • They could do it without recompilation, but something like changing the obfuscation and recompiling for every copy would likely make it much harder to get rid of the watermarks, even if you can compare several different copies (with identical builds, any byte that differs between copies must be watermark; see the sketch below)
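
    The naive attack that this defeats, assuming all copies are built identically except for the embedded watermark bytes (names and data made up for illustration): colluders just diff their copies and destroy whatever differs, since only watermark bytes can differ. Per-copy obfuscation and recompilation makes nearly every byte differ, so the diff no longer isolates the watermark:

    ```python
    # Sketch of the collusion attack that per-copy recompilation defeats:
    # with otherwise identical builds, any byte that differs between
    # copies must be watermark, so the attackers simply overwrite it.
    def strip_naive_watermark(copies: list[bytes]) -> bytes:
        merged = bytearray(copies[0])
        for i in range(len(merged)):
            if any(c[i] != copies[0][i] for c in copies[1:]):
                merged[i] = 0  # identified as watermark, destroy it
        return bytes(merged)

    # two "review copies", identical except for an embedded ID
    copy_a = b"PROGRAM CODE \x01\x07 MORE CODE"
    copy_b = b"PROGRAM CODE \x02\x04 MORE CODE"
    print(strip_naive_watermark([copy_a, copy_b]))  # ID bytes zeroed out
    ```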

    (though they could also have multiple watermarked sections, so that for any group of, for example, 3 colluding copies there would be some section that is identical across all of them but still watermarked, and would uniquely identify all three leakers. The number of combinations you need to tell apart grows exponentially with the collusion size you want to resist, but the watermark only needs enough bits for the logarithm of that: if you have say 1000 review copies and want to be resistant to 4 copies being “merged”, you only need to distinguish between at most 1000^4 combinations, so you can theoretically get away with a watermark that contains about 40 bits of data, since log2(1000^4) ≈ 40)
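
    A quick sanity check of those numbers (1000 copies and collusion size 4 are the example above; the unordered-set bound is a refinement, not something the comment claims):

    ```python
    from math import comb, log2

    copies = 1000   # distinct review copies
    collusion = 4   # max copies merged by colluders

    # upper bound: ordered 4-tuples of colluders
    print(log2(copies ** collusion))      # ~39.9 bits

    # tighter: colluding sets are unordered, so C(1000, 4) suffices
    print(log2(comb(copies, collusion)))  # ~35.3 bits
    ```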