A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send a request with the right Accept header to the server to view the underlying object.
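The request described above can be sketched with the Python standard library. The Accept value below is the media type the ActivityPub specification recommends (`application/ld+json` with the ActivityStreams profile), with `application/activity+json` as a widely accepted fallback; without it, most servers return the HTML page instead of the object.

```python
# Minimal sketch of requesting an ActivityPub object as JSON.
# The URL below is the example object shown on this page.
import urllib.request

ACCEPT = ('application/ld+json; '
          'profile="https://www.w3.org/ns/activitystreams", '
          'application/activity+json')

def build_activitypub_request(url: str) -> urllib.request.Request:
    """Build a GET request asking the server for the ActivityPub JSON."""
    return urllib.request.Request(url, headers={"Accept": ACCEPT})

req = build_activitypub_request(
    "https://neuromatch.social/users/neuralreckoning/statuses/114152720253398303"
)
print(req.get_header("Accept"))
# To actually fetch and parse:
#   import json
#   obj = json.load(urllib.request.urlopen(req))
```

The network call is left commented out so the sketch stays self-contained; any HTTP client works as long as it sends that header.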
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      "ostatus": "http://ostatus.org#",
      "atomUri": "ostatus:atomUri",
      "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
      "conversation": "ostatus:conversation",
      "sensitive": "as:sensitive",
      "toot": "http://joinmastodon.org/ns#",
      "votersCount": "toot:votersCount",
      "litepub": "http://litepub.social/ns#",
      "directMessage": "litepub:directMessage",
      "Hashtag": "as:Hashtag"
    }
  ],
  "id": "https://neuromatch.social/users/neuralreckoning/statuses/114152720253398303",
  "type": "Note",
  "summary": null,
  "inReplyTo": null,
  "published": "2025-03-13T02:17:59Z",
  "url": "https://neuromatch.social/@neuralreckoning/114152720253398303",
  "attributedTo": "https://neuromatch.social/users/neuralreckoning",
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "cc": [
    "https://neuromatch.social/users/neuralreckoning/followers"
  ],
  "sensitive": false,
  "atomUri": "https://neuromatch.social/users/neuralreckoning/statuses/114152720253398303",
  "inReplyToAtomUri": null,
  "conversation": "tag:neuromatch.social,2025-03-13:objectId=25143531:objectType=Conversation",
  "content": "<p>When I read research papers that are the result of very expensive work (experiments or simulations) I always want to know: how could this project have possibly ended with a null result? And is there an argument in this paper that compares the actual result to this null? If not, I'm very suspicious.</p><p>Actually this is a good question to ask about any paper, but the high stakes of super expensive research make it particularly important to ask the question. In my experience, it is surprisingly rarely answered in the paper and I find it hard to believe in these results.</p><p><a href=\"https://neuromatch.social/tags/science\" class=\"mention hashtag\" rel=\"tag\">#<span>science</span></a> <a href=\"https://neuromatch.social/tags/neuroscience\" class=\"mention hashtag\" rel=\"tag\">#<span>neuroscience</span></a></p>",
  "contentMap": {
    "en": "<p>When I read research papers that are the result of very expensive work (experiments or simulations) I always want to know: how could this project have possibly ended with a null result? And is there an argument in this paper that compares the actual result to this null? If not, I'm very suspicious.</p><p>Actually this is a good question to ask about any paper, but the high stakes of super expensive research make it particularly important to ask the question. In my experience, it is surprisingly rarely answered in the paper and I find it hard to believe in these results.</p><p><a href=\"https://neuromatch.social/tags/science\" class=\"mention hashtag\" rel=\"tag\">#<span>science</span></a> <a href=\"https://neuromatch.social/tags/neuroscience\" class=\"mention hashtag\" rel=\"tag\">#<span>neuroscience</span></a></p>"
  },
  "attachment": [],
  "tag": [
    {
      "type": "Hashtag",
      "href": "https://neuromatch.social/tags/science",
      "name": "#science"
    },
    {
      "type": "Hashtag",
      "href": "https://neuromatch.social/tags/neuroscience",
      "name": "#neuroscience"
    }
  ],
  "replies": {
    "id": "https://neuromatch.social/users/neuralreckoning/statuses/114152720253398303/replies",
    "type": "Collection",
    "first": {
      "type": "CollectionPage",
      "next": "https://neuromatch.social/users/neuralreckoning/statuses/114152720253398303/replies?only_other_accounts=true&page=true",
      "partOf": "https://neuromatch.social/users/neuralreckoning/statuses/114152720253398303/replies",
      "items": []
    }
  },
  "likes": {
    "id": "https://neuromatch.social/users/neuralreckoning/statuses/114152720253398303/likes",
    "type": "Collection",
    "totalItems": 15
  },
  "shares": {
    "id": "https://neuromatch.social/users/neuralreckoning/statuses/114152720253398303/shares",
    "type": "Collection",
    "totalItems": 16
  }
}
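When a username rather than a URL is entered, the standard way to resolve a Mastodon-style handle to an object URL is WebFinger (RFC 7033): the server is asked at `/.well-known/webfinger` for an `acct:` resource, and the response's links point at the actor document. A minimal sketch of building that lookup URL, using the handle of the example post's author:

```python
# Hedged sketch: construct the WebFinger lookup URL for a @user@domain
# handle. The /.well-known/webfinger path and the acct: resource scheme
# come from RFC 7033; the network call itself is left to the caller.
def webfinger_url(handle: str) -> str:
    """Return the WebFinger lookup URL for a @user@domain handle."""
    user, domain = handle.lstrip("@").split("@", 1)
    return (f"https://{domain}/.well-known/webfinger"
            f"?resource=acct:{user}@{domain}")

print(webfinger_url("@neuralreckoning@neuromatch.social"))
# → https://neuromatch.social/.well-known/webfinger?resource=acct:neuralreckoning@neuromatch.social
```

The JSON Resource Descriptor returned by that URL contains a `links` array; the entry with `rel` of `"self"` and an ActivityPub media type gives the actor URL to fetch with the Accept header described above.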