ActivityPub Viewer

A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send the server a request with the right Accept header so it returns the underlying ActivityPub object rather than an HTML page.
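For context, here is a minimal sketch (in TypeScript, using the standard fetch API) of the kind of request involved. The exact headers this viewer sends are an assumption; the key detail is asking for application/activity+json instead of HTML. The URL used here is the example object shown below.

```typescript
// Minimal sketch: fetch an ActivityPub object as JSON rather than HTML.
// Any object or actor URL works the same way.
async function fetchActivityPubObject(url: string): Promise<unknown> {
  const response = await fetch(url, {
    headers: {
      // Many servers also accept the JSON-LD form:
      // 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'
      Accept: "application/activity+json",
    },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}

// Example: fetch the Note shown below and pretty-print it.
fetchActivityPubObject(
  "https://sfba.social/users/williampietri/statuses/113594971400277267"
).then((obj) => console.log(JSON.stringify(obj, null, 2)));
```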

{ "@context": [ "https://www.w3.org/ns/activitystreams", { "ostatus": "http://ostatus.org#", "atomUri": "ostatus:atomUri", "inReplyToAtomUri": "ostatus:inReplyToAtomUri", "conversation": "ostatus:conversation", "sensitive": "as:sensitive", "toot": "http://joinmastodon.org/ns#", "votersCount": "toot:votersCount" } ], "id": "https://sfba.social/users/williampietri/statuses/113594971400277267", "type": "Note", "summary": null, "inReplyTo": null, "published": "2024-12-04T14:15:07Z", "url": "https://sfba.social/@williampietri/113594971400277267", "attributedTo": "https://sfba.social/users/williampietri", "to": [ "https://www.w3.org/ns/activitystreams#Public" ], "cc": [ "https://sfba.social/users/williampietri/followers" ], "sensitive": false, "atomUri": "https://sfba.social/users/williampietri/statuses/113594971400277267", "inReplyToAtomUri": null, "conversation": "tag:sfba.social,2024-12-04:objectId=193900736:objectType=Conversation", "content": "<p>We&#39;ve launched! After months of work, MLCommons has released our v1.0 benchmark that measures LLM (aka &quot;AI&quot;) propensity for giving hazardous responses. </p><p>Here&#39;s the results for 15 common models: <a href=\"https://ailuminate.mlcommons.org/benchmarks/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"ellipsis\">ailuminate.mlcommons.org/bench</span><span class=\"invisible\">marks/</span></a></p><p>And here&#39;s the overview: <a href=\"https://mlcommons.org/ailuminate/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">mlcommons.org/ailuminate/</span><span class=\"invisible\"></span></a></p><p>I was the tech lead for the software and want to give a shout out to my excellent team of developers and the many experts we worked closely with to make this happen.</p>", "contentMap": { "en": "<p>We&#39;ve launched! After months of work, MLCommons has released our v1.0 benchmark that measures LLM (aka &quot;AI&quot;) propensity for giving hazardous responses. 
</p><p>Here&#39;s the results for 15 common models: <a href=\"https://ailuminate.mlcommons.org/benchmarks/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"ellipsis\">ailuminate.mlcommons.org/bench</span><span class=\"invisible\">marks/</span></a></p><p>And here&#39;s the overview: <a href=\"https://mlcommons.org/ailuminate/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">mlcommons.org/ailuminate/</span><span class=\"invisible\"></span></a></p><p>I was the tech lead for the software and want to give a shout out to my excellent team of developers and the many experts we worked closely with to make this happen.</p>" }, "attachment": [], "tag": [], "replies": { "id": "https://sfba.social/users/williampietri/statuses/113594971400277267/replies", "type": "Collection", "first": { "type": "CollectionPage", "next": "https://sfba.social/users/williampietri/statuses/113594971400277267/replies?min_id=113595185749242063&page=true", "partOf": "https://sfba.social/users/williampietri/statuses/113594971400277267/replies", "items": [ "https://sfba.social/users/williampietri/statuses/113595185749242063" ] } }, "likes": { "id": "https://sfba.social/users/williampietri/statuses/113594971400277267/likes", "type": "Collection", "totalItems": 11 }, "shares": { "id": "https://sfba.social/users/williampietri/statuses/113594971400277267/shares", "type": "Collection", "totalItems": 9 } }
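When the input is a username like @someone@example.social rather than a URL, there is an extra discovery step. A common approach, sketched below under the assumption that the server exposes the standard WebFinger endpoint (as Mastodon does), is to query /.well-known/webfinger and take the rel="self" link whose type is application/activity+json; that link is the actor URL, which can then be fetched with the Accept header shown above.

```typescript
// Sketch: resolve an @user@domain handle to its ActivityPub actor URL via WebFinger.
// Assumes the server exposes the standard /.well-known/webfinger endpoint.
interface WebFingerLink {
  rel: string;
  type?: string;
  href?: string;
}

async function resolveHandle(handle: string): Promise<string> {
  const [, user, domain] = handle.match(/^@?([^@]+)@(.+)$/) ?? [];
  if (!user || !domain) throw new Error(`Not a user@domain handle: ${handle}`);

  const url = `https://${domain}/.well-known/webfinger?resource=acct:${user}@${domain}`;
  const response = await fetch(url, { headers: { Accept: "application/jrd+json" } });
  if (!response.ok) throw new Error(`WebFinger lookup failed: ${response.status}`);

  const jrd = (await response.json()) as { links?: WebFingerLink[] };
  const self = jrd.links?.find(
    (l) => l.rel === "self" && l.type === "application/activity+json"
  );
  if (!self?.href) throw new Error("No ActivityPub actor link found");
  return self.href; // e.g. https://sfba.social/users/williampietri
}
```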