A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send a request with the right Accept header to the server to view the underlying object.
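The "right Accept header" above means asking the server for the ActivityPub representation rather than the HTML page. A minimal sketch of that request, using Python's standard library (the URL is the example object shown below; `application/activity+json` is the ActivityPub media type):

```python
import urllib.request

# The media type servers use to recognize an ActivityPub request.
ACCEPT = "application/activity+json"

def build_request(url: str) -> urllib.request.Request:
    """Build a GET request that asks for the ActivityPub JSON object."""
    return urllib.request.Request(url, headers={"Accept": ACCEPT})

req = build_request("https://mstdn.social/users/hobs/statuses/110183110045582640")
# Passing req to urllib.request.urlopen() would return the JSON document
# shown below instead of the HTML status page.
print(req.get_header("Accept"))  # → application/activity+json
```

Without the Accept header, the same URL typically serves the human-readable HTML version of the post.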
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      "ostatus": "http://ostatus.org#",
      "atomUri": "ostatus:atomUri",
      "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
      "conversation": "ostatus:conversation",
      "sensitive": "as:sensitive",
      "toot": "http://joinmastodon.org/ns#",
      "votersCount": "toot:votersCount",
      "Hashtag": "as:Hashtag"
    }
  ],
  "id": "https://mstdn.social/users/hobs/statuses/110183110045582640",
  "type": "Note",
  "summary": null,
  "inReplyTo": "https://mstdn.social/users/rysiek/statuses/110180508107178466",
  "published": "2023-04-12T00:53:54Z",
  "url": "https://mstdn.social/@hobs/110183110045582640",
  "attributedTo": "https://mstdn.social/users/hobs",
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "cc": [
    "https://mstdn.social/users/hobs/followers",
    "https://mstdn.social/users/rysiek",
    "https://pleroma.pch.net/users/woody"
  ],
  "sensitive": false,
  "atomUri": "https://mstdn.social/users/hobs/statuses/110183110045582640",
  "inReplyToAtomUri": "https://mstdn.social/users/rysiek/statuses/110180508107178466",
  "conversation": "tag:mstdn.social,2023-04-11:objectId=190774127:objectType=Conversation",
  "content": "<p><span class=\"h-card\" translate=\"no\"><a href=\"https://mstdn.social/@rysiek\" class=\"u-url mention\">@<span>rysiek</span></a></span> <span class=\"h-card\" translate=\"no\"><a href=\"https://pleroma.pch.net/users/woody\" class=\"u-url mention\">@<span>woody</span></a></span> The first step in controlling or regulating AI is predicting what it will do next. <br />( <a href=\"https://mstdn.social/tags/AIControlProblem\" class=\"mention hashtag\" rel=\"tag\">#<span>AIControlProblem</span></a> <a href=\"https://mstdn.social/tags/AISafety\" class=\"mention hashtag\" rel=\"tag\">#<span>AISafety</span></a> <a href=\"https://mstdn.social/tags/AIAlignment\" class=\"mention hashtag\" rel=\"tag\">#<span>AIAlignment</span></a> - <a href=\"https://en.m.wikipedia.org/wiki/AI_alignment\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"ellipsis\">en.m.wikipedia.org/wiki/AI_ali</span><span class=\"invisible\">gnment</span></a> )</p><p>And to predict what a system will do next you have to first get good at explaining why it did what it did the last time.</p><p>The smartest researchers think we're decades away from being able to explain deep neural networks. So LLMs & self driving cars keep doing bad things.</p><p><a href=\"https://mstdn.social/tags/AIExplainability\" class=\"mention hashtag\" rel=\"tag\">#<span>AIExplainability</span></a> - <a href=\"https://en.wikipedia.org/wiki/Explainable_artificial_intelligence\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"ellipsis\">en.wikipedia.org/wiki/Explaina</span><span class=\"invisible\">ble_artificial_intelligence</span></a></p>",
  "contentMap": {
    "en": "<p><span class=\"h-card\" translate=\"no\"><a href=\"https://mstdn.social/@rysiek\" class=\"u-url mention\">@<span>rysiek</span></a></span> <span class=\"h-card\" translate=\"no\"><a href=\"https://pleroma.pch.net/users/woody\" class=\"u-url mention\">@<span>woody</span></a></span> The first step in controlling or regulating AI is predicting what it will do next. <br />( <a href=\"https://mstdn.social/tags/AIControlProblem\" class=\"mention hashtag\" rel=\"tag\">#<span>AIControlProblem</span></a> <a href=\"https://mstdn.social/tags/AISafety\" class=\"mention hashtag\" rel=\"tag\">#<span>AISafety</span></a> <a href=\"https://mstdn.social/tags/AIAlignment\" class=\"mention hashtag\" rel=\"tag\">#<span>AIAlignment</span></a> - <a href=\"https://en.m.wikipedia.org/wiki/AI_alignment\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"ellipsis\">en.m.wikipedia.org/wiki/AI_ali</span><span class=\"invisible\">gnment</span></a> )</p><p>And to predict what a system will do next you have to first get good at explaining why it did what it did the last time.</p><p>The smartest researchers think we're decades away from being able to explain deep neural networks. So LLMs & self driving cars keep doing bad things.</p><p><a href=\"https://mstdn.social/tags/AIExplainability\" class=\"mention hashtag\" rel=\"tag\">#<span>AIExplainability</span></a> - <a href=\"https://en.wikipedia.org/wiki/Explainable_artificial_intelligence\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"ellipsis\">en.wikipedia.org/wiki/Explaina</span><span class=\"invisible\">ble_artificial_intelligence</span></a></p>"
  },
  "attachment": [],
  "tag": [
    {
      "type": "Mention",
      "href": "https://mstdn.social/users/rysiek",
      "name": "@rysiek"
    },
    {
      "type": "Mention",
      "href": "https://pleroma.pch.net/users/woody",
      "name": "@woody@pch.net"
    },
    {
      "type": "Hashtag",
      "href": "https://mstdn.social/tags/aicontrolproblem",
      "name": "#aicontrolproblem"
    },
    {
      "type": "Hashtag",
      "href": "https://mstdn.social/tags/aisafety",
      "name": "#aisafety"
    },
    {
      "type": "Hashtag",
      "href": "https://mstdn.social/tags/aialignment",
      "name": "#aialignment"
    },
    {
      "type": "Hashtag",
      "href": "https://mstdn.social/tags/aiexplainability",
      "name": "#aiexplainability"
    }
  ],
  "replies": {
    "id": "https://mstdn.social/users/hobs/statuses/110183110045582640/replies",
    "type": "Collection",
    "first": {
      "type": "CollectionPage",
      "next": "https://mstdn.social/users/hobs/statuses/110183110045582640/replies?only_other_accounts=true&page=true",
      "partOf": "https://mstdn.social/users/hobs/statuses/110183110045582640/replies",
      "items": []
    }
  },
  "likes": {
    "id": "https://mstdn.social/users/hobs/statuses/110183110045582640/likes",
    "type": "Collection",
    "totalItems": 1
  },
  "shares": {
    "id": "https://mstdn.social/users/hobs/statuses/110183110045582640/shares",
    "type": "Collection",
    "totalItems": 1
  }
}
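A fetched Note like the one above is plain JSON, so its structured fields are easy to work with directly. A hypothetical sketch of pulling the mentions and hashtags out of the "tag" array (the `note` dict below is an abbreviated excerpt of the object above):

```python
# Abbreviated excerpt of the Note object's "tag" array from above.
note = {
    "type": "Note",
    "tag": [
        {"type": "Mention", "href": "https://mstdn.social/users/rysiek", "name": "@rysiek"},
        {"type": "Mention", "href": "https://pleroma.pch.net/users/woody", "name": "@woody@pch.net"},
        {"type": "Hashtag", "href": "https://mstdn.social/tags/aisafety", "name": "#aisafety"},
    ],
}

# Entries share one array; the "type" field distinguishes mentions from hashtags.
mentions = [t["name"] for t in note.get("tag", []) if t["type"] == "Mention"]
hashtags = [t["name"] for t in note.get("tag", []) if t["type"] == "Hashtag"]
print(mentions)  # → ['@rysiek', '@woody@pch.net']
print(hashtags)  # → ['#aisafety']
```

Note that the "tag" entries carry the canonical lowercase hashtag names and full mention addresses, which is more reliable than re-parsing the HTML in "content".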