ActivityPub Viewer

A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send the server a request with the appropriate Accept header (application/activity+json) so it returns the underlying object.
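The fetch the tool performs can be sketched in a few lines — a minimal example using only the Python standard library, assuming the server follows the ActivityPub content-negotiation convention (the URL below is the object shown further down; any ActivityPub object URL would work):

```python
import urllib.request

# Media types ActivityPub servers recognize for raw-object lookups:
# application/activity+json, or JSON-LD with the activitystreams profile.
ACCEPT = ('application/activity+json, '
          'application/ld+json; profile="https://www.w3.org/ns/activitystreams"')

def build_request(url: str) -> urllib.request.Request:
    """Build a GET request asking the server for the underlying JSON object
    instead of the HTML page a browser would receive."""
    return urllib.request.Request(url, headers={"Accept": ACCEPT})

req = build_request(
    "https://infosec.exchange/users/b4rbito/statuses/110694166363950571")
# urllib.request.urlopen(req).read() would then return JSON like the
# object shown below, rather than the HTML profile page.
```

Without that Accept header, most servers respond with the human-readable HTML view instead of the JSON object.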

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      "ostatus": "http://ostatus.org#",
      "atomUri": "ostatus:atomUri",
      "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
      "conversation": "ostatus:conversation",
      "sensitive": "as:sensitive",
      "toot": "http://joinmastodon.org/ns#",
      "votersCount": "toot:votersCount",
      "litepub": "http://litepub.social/ns#",
      "directMessage": "litepub:directMessage",
      "blurhash": "toot:blurhash",
      "focalPoint": {
        "@container": "@list",
        "@id": "toot:focalPoint"
      }
    }
  ],
  "id": "https://infosec.exchange/users/b4rbito/statuses/110694166363950571",
  "type": "Note",
  "summary": null,
  "inReplyTo": "https://mastodon.online/users/ezhes_/statuses/110693965685465641",
  "published": "2023-07-11T07:02:14Z",
  "url": "https://infosec.exchange/@b4rbito/110694166363950571",
  "attributedTo": "https://infosec.exchange/users/b4rbito",
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "cc": [
    "https://infosec.exchange/users/b4rbito/followers",
    "https://mastodon.online/users/ezhes_",
    "https://infosec.exchange/users/vusec"
  ],
  "sensitive": false,
  "atomUri": "https://infosec.exchange/users/b4rbito/statuses/110694166363950571",
  "inReplyToAtomUri": "https://mastodon.online/users/ezhes_/statuses/110693965685465641",
  "conversation": "tag:mastodon.online,2023-07-11:objectId=190068566:objectType=Conversation",
  "content": "<p><span class=\"h-card\" translate=\"no\"><a href=\"https://mastodon.online/@ezhes_\" class=\"u-url mention\">@<span>ezhes_</span></a></span> <span class=\"h-card\" translate=\"no\"><a href=\"https://infosec.exchange/@vusec\" class=\"u-url mention\">@<span>vusec</span></a></span> Ok now I understood.</p><p>We tried to get detailed root causes of float checks overhead but it&#39;s really complex. I&#39;ll give here some examples:</p><p>[1] Using ubench to compare</p><p>vaddss xmm0, xmm0, [rdi]</p><p>vs</p><p>cmp word ptr [rdi], 0x8b8b8b8b<br />Je 1f<br />1:</p><p>We get the results in the attached figure (left vaddss, right cmp).<br />Cmp+je should be faster (look at clock cycles) however when applied to SPEC benchmarks vaddss is much faster. I guess it is mostly due to the fact that vaddss is very friendly on piepeline scheduling: they can be executed out of order and the result is never used by subsequent instructions avoiding dependency issues.</p><p>[2] Our current instrumentation of mem* family is not ideal. We first loop over all the memory to check for RedZone values, and only after we perform the mem* operation. This is really bad from the cache/memory side. Ideally we should interleave the two but it&#39;s not easy.</p><p>[3] By adding an equivalent amount of &quot;nops&quot; instead of the checks we still get a measurable slow down and an increase of 1.5% in Branch misprediction. So code alignment and intrusive addition of instructions can affect heavily the overhead </p><p>In conclusion we decided to base our claims on SPEC benchmarks. All the above experiments show how the overhead is really affected by a lot of conditions. I do believe that any micro benchmark will hide some source of overheads.<br />On the other hand with this approach it is really hard to determine the root cause</p>",
  "contentMap": {
    "en": "<p><span class=\"h-card\" translate=\"no\"><a href=\"https://mastodon.online/@ezhes_\" class=\"u-url mention\">@<span>ezhes_</span></a></span> <span class=\"h-card\" translate=\"no\"><a href=\"https://infosec.exchange/@vusec\" class=\"u-url mention\">@<span>vusec</span></a></span> Ok now I understood.</p><p>We tried to get detailed root causes of float checks overhead but it&#39;s really complex. I&#39;ll give here some examples:</p><p>[1] Using ubench to compare</p><p>vaddss xmm0, xmm0, [rdi]</p><p>vs</p><p>cmp word ptr [rdi], 0x8b8b8b8b<br />Je 1f<br />1:</p><p>We get the results in the attached figure (left vaddss, right cmp).<br />Cmp+je should be faster (look at clock cycles) however when applied to SPEC benchmarks vaddss is much faster. I guess it is mostly due to the fact that vaddss is very friendly on piepeline scheduling: they can be executed out of order and the result is never used by subsequent instructions avoiding dependency issues.</p><p>[2] Our current instrumentation of mem* family is not ideal. We first loop over all the memory to check for RedZone values, and only after we perform the mem* operation. This is really bad from the cache/memory side. Ideally we should interleave the two but it&#39;s not easy.</p><p>[3] By adding an equivalent amount of &quot;nops&quot; instead of the checks we still get a measurable slow down and an increase of 1.5% in Branch misprediction. So code alignment and intrusive addition of instructions can affect heavily the overhead </p><p>In conclusion we decided to base our claims on SPEC benchmarks. All the above experiments show how the overhead is really affected by a lot of conditions. I do believe that any micro benchmark will hide some source of overheads.<br />On the other hand with this approach it is really hard to determine the root cause</p>"
  },
  "updated": "2023-07-11T07:13:55Z",
  "attachment": [
    {
      "type": "Document",
      "mediaType": "image/png",
      "url": "https://media.infosec.exchange/infosec.exchange/media_attachments/files/110/694/127/530/870/177/original/072a1c4af31940cf.png",
      "name": null,
      "blurhash": "UiOyxLR%ofNE8yoeayt6%fjuj@oLWAayfQae",
      "width": 1897,
      "height": 761
    }
  ],
  "tag": [
    {
      "type": "Mention",
      "href": "https://mastodon.online/users/ezhes_",
      "name": "@ezhes_@mastodon.online"
    },
    {
      "type": "Mention",
      "href": "https://infosec.exchange/users/vusec",
      "name": "@vusec"
    }
  ],
  "replies": {
    "id": "https://infosec.exchange/users/b4rbito/statuses/110694166363950571/replies",
    "type": "Collection",
    "first": {
      "type": "CollectionPage",
      "next": "https://infosec.exchange/users/b4rbito/statuses/110694166363950571/replies?only_other_accounts=true&page=true",
      "partOf": "https://infosec.exchange/users/b4rbito/statuses/110694166363950571/replies",
      "items": []
    }
  },
  "likes": {
    "id": "https://infosec.exchange/users/b4rbito/statuses/110694166363950571/likes",
    "type": "Collection",
    "totalItems": 0
  },
  "shares": {
    "id": "https://infosec.exchange/users/b4rbito/statuses/110694166363950571/shares",
    "type": "Collection",
    "totalItems": 0
  }
}