Mostly just used for moderation.
Main account is https://piefed.social/u/andrew_s
It’s the crawler at lemmyverse that’s down - https://data.lemmyverse.net/ shows that it hasn’t updated in 11 days
Yeah, I know what you mean. That note is misleading, and kinda redundant too - you can physically de-select Undetermined in the UI, but the change won’t actually take if you press ‘Save’.
Most likely reason is that you unticked ‘English’ as a language you understand when you were playing around.
No, sorry. There’s !lemmyconnect@lemmy.ca you could ask in, 'cos your comment here might not be seen much.
Looks like it, yeah (though peaks and troughs are to be expected). The next few days won’t show entirely accurate results, because the bot uses data provided by lemmyverse.net, and that site’s crawler has been failing.
Well, there’s good news and bad news.
The good news is that Lemmy is now surrounding your spoilers with the expected Details and Summary tags, and moving the HR means PieFed is able to interpret the Markdown for both spoilers.
The bad news:
It turns out KBIN doesn’t understand Details/Summary tags (even though a browser on its own does, so that’s KBIN’s problem).
Neither PieFed, nor KBIN, nor MS Edge looking at the raw HTML can properly deal with a list that starts at ‘0’.
Lemmy is no longer putting List tags around anything inside the spoilers (so this post now looks worse on KBIN. Sorry about that, KBIN users).
Firstly, sorry for any potential derailment. This is a comment about the Markdown used in your post (I wouldn’t normally mention it, but consider it fair game since this is a ‘Fediverse’ community).
The spec for Lemmy’s spoiler format is colon-colon-colon-space-spoiler. If you miss out the space, then whilst other Lemmy instances can reconstitute the Markdown to see this post as intended, Lemmy itself doesn’t generate the correct HTML when sending it out over ActivityPub. This means that other Fediverse apps that just look at the HTML (e.g. Mastodon, KBIN) can’t render it properly.
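To illustrate, going by that spec, the form that works everywhere is:

```
::: spoiler some title
Hidden content.
:::
```

whereas `:::spoiler some title` (no space after the colons) only renders as intended between Lemmy instances.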
Screenshot from kbin:
Also, if you add a horizontal rule without a blank line above it, Markdown generally interprets this as meaning that you want the text above it to be a heading. So anything that doesn’t have the full force of Lemmy’s Markdown processor, and is trying to re-make the HTML from the Markdown, now has to deal with the closing triple colons having ‘h2’ tags around them.
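For example, in most Markdown implementations:

```
:::
---
```

wraps the `:::` line in h2 tags (the underline-makes-a-heading rule), while

```
:::

---
```

(with a blank line added) is plain text followed by a horizontal rule.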
Screenshot from piefed:
(apologies again for being off-topic)
Update: for LW, this behaviour stopped around about Friday 12th April. Not sure what changed, but at least the biggest instance isn’t doing it anymore.
I’ve been coerced into reporting it as a bug in Lemmy itself - perhaps you could add your own observations here so I seem like less of a crank. Thanks.
I’ve since relented, and filed a bug
Yeah, that’s the conclusion I came away with from the lemmy.ca and endlesstalk.org chats - that it’s due to multiple Docker containers. In the LW Matrix room though, an admin said he saw one container send the same activity out 3 times. Also, LW were presumably running multiple containers on 0.18.5, when it didn’t happen, so it may be that multiple containers are only part of the problem.
When I’ve mentioned this issue to admins at lemmy.ca and endlesstalk.org (relevant posts here and here), they’ve suggested it’s a misconfiguration. When I said the same to lemmy.world admins (relevant comment here), they also suggested it was misconfig. I mentioned it again recently on the LW channel, and it was only then that Lemmy itself was proposed as a problem. It happens on plenty of servers, but not all of them, so I don’t know where the fault lies.
A bug report for software I don’t run, and so can’t reproduce, would be closed anyway. I think ‘steps to reproduce’ is pretty much the first line in a bug report.
If I ran a server that used someone else’s software to allow users to download a file, and someone told me that every 2nd byte needed to be discarded, I like to think I’d investigate and contact the software vendors if required. I wouldn’t tell the user that it’s something they should be doing. I feel like I’m the user in this scenario.
We were typing at the same time, it seems. I’ve included more info in a comment above, showing that they were POST requests.
Also, the green terminal is outputting part of the body of each request, to demonstrate. If they weren’t POST requests to /inbox, my server wouldn’t have even picked them up.
EDIT: by ‘server’ I mean the back-end one, the one nginx is reverse-proxying to.
They’re all POST requests. I trimmed it out of the log for space, but the first 6 requests on the video looked like this (nginx shows the data amount for GET, but not POST):
ip.address - - [07/Apr/2024:23:18:44 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:18:44 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:19:14 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:19:14 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:19:44 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
ip.address - - [07/Apr/2024:23:19:44 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
If I were running Lemmy, every second line would say 400, from it rejecting the activity as a duplicate. In terms of bandwidth, every line represents a full JSON payload, so I guess it’s about 2K minimum for the standard cruft, plus however much for the actual contents of the comment (the comment replying to this would’ve been 8K).
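(The duplicate rejection itself is only an id check. Here’s a rough Python sketch of the idea - purely illustrative, since Lemmy’s actual implementation is Rust checking against its database:)

```python
# Illustrative sketch only - a receiving inbox rejects an activity
# whose id it has already processed.
seen_activity_ids: set[str] = set()

def receive_activity(activity: dict) -> int:
    """Return the HTTP status an inbox might answer with."""
    if activity["id"] in seen_activity_ids:
        return 400  # duplicate: rejected, as in the log above
    seen_activity_ids.add(activity["id"])
    # ...normal processing (store the comment, etc.) would go here...
    return 200
```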
My server just took the requests and dumped the bodies out to a file, and then a script was outputting the object.id, object.type and object.actor into /tmp/demo.txt (which is another confirmation that they were POST requests, of course)
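For anyone wanting to replicate that, a minimal Python sketch of the kind of back-end involved (not the exact script I ran - the port and formatting are illustrative):

```python
# Minimal sketch: accept POSTs to /inbox, log object.id / object.type /
# object.actor to /tmp/demo.txt, and answer 200 (nginx sits in front).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InboxHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        if self.path == "/inbox":
            try:
                activity = json.loads(body)
                obj = activity.get("object", {})
                if isinstance(obj, dict):
                    with open("/tmp/demo.txt", "a") as f:
                        f.write(f'{obj.get("id")} {obj.get("type")} '
                                f'{obj.get("actor")}\n')
            except json.JSONDecodeError:
                pass  # not JSON; ignore it
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), InboxHandler).serve_forever()
```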
I can’t reproduce anything, because I don’t run Lemmy on my server. It’s possible to infer that it’s related to the software (because LW didn’t do this when it was on 0.18.5). However, it’s not something that, for example, lemmy.ml does. An admin on the LW Matrix chat suggested that it’s likely a combination of instance configuration and software changes, but a bug report from me (who has no idea how LW is set up) wouldn’t be much use.
I’d gently suggest that, if LW admins think it’s a configuration problem, they should talk to other Lemmy admins, and if they think Lemmy itself plays a role, they should talk to the devs. I could be wrong, but this has been happening for a while now, and I don’t get the sense that anyone is talking to anyone about it.
Oh, right. The chat on GitHub is over my head, but I would have thought that solving the problem of instances sending every activity 2 or 3 times would help with that, since even rejecting something as a duplicate must eat up some time.
Hmmm. Speaking of Fediverse interoperability, platforms other than yours (Pandacap) typically arrange things so that https://pandacap.azurewebsites.net/ is the domain, and something like https://pandacap.azurewebsites.net/users/lizard-socks is the user, but Pandacap wants to use https://pandacap.azurewebsites.net/ for both. Combined with the fact that it doesn’t seem to support /.well-known/nodeinfo, this means that no other platform knows what software it’s running.
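(For reference, NodeInfo discovery is a two-step fetch. A rough Python sketch of what other platforms do - illustrative, with no error handling:)

```python
# Sketch of NodeInfo discovery - how platforms learn what software runs
# on a domain. If /.well-known/nodeinfo 404s, this is where it stops.
import json
import urllib.request

def discover_software(domain: str) -> str:
    # Step 1: the well-known document just lists links to NodeInfo docs.
    with urllib.request.urlopen(f"https://{domain}/.well-known/nodeinfo") as r:
        links = json.load(r)["links"]
    # Step 2: follow a link to the actual NodeInfo document.
    with urllib.request.urlopen(links[0]["href"]) as r:
        nodeinfo = json.load(r)
    return nodeinfo["software"]["name"]  # e.g. "lemmy", "mastodon"
```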
When your actor sends something out, it uses the id https://pandacap.azurewebsites.net/, but when something tries to look that up, it returns a “Person” with a subtly different id of https://pandacap.azurewebsites.net (no trailing slash). So there’s the potential to create the following:

1. https://pandacap.azurewebsites.net/ sends something out. The instance doesn’t recognise the actor, looks it up, and creates a user under the id it gets back (https://pandacap.azurewebsites.net - no trailing slash).
2. https://pandacap.azurewebsites.net/ sends something else out. The instance looks in its DB, finds nothing, so looks the actor up and tries to create it again.

The best case is that it meets a DB uniqueness constraint, because the ID it gets back from that lookup does actually exist (so it can use that, but it was a long way around to find it). The worst case - when there’s no DB uniqueness constraint - is that a ‘new’ user is created every time.

If every new platform treats the Fediverse as a wheel that needs to be re-invented, then the whole project is doomed.
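To make the trailing-slash loop concrete, here’s a hedged Python sketch of the receiving instance’s side (the helper and table names are made up for illustration):

```python
# Illustrative sketch of the lookup loop described above.
actors_by_id: dict[str, dict] = {}  # stand-in for the instance's user table

def fetch_actor(actor_id: str) -> dict:
    # Stand-in for an HTTP GET of the actor document. Pandacap answers
    # with a subtly different id: no trailing slash.
    return {"type": "Person", "id": actor_id.rstrip("/")}

def handle_activity(activity: dict) -> None:
    actor_id = activity["actor"]      # ends with a trailing slash
    if actor_id in actors_by_id:
        return                        # known actor - the normal fast path
    person = fetch_actor(actor_id)    # fetched doc has the no-slash id
    # Stored under the fetched id, so the next activity from the
    # trailing-slash id misses again, and the remote fetch repeats forever.
    actors_by_id[person["id"]] = person
```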