A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send a request with the right Accept header to the server to view the underlying object.
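As a rough illustration, the request this tool makes is equivalent to fetching the object's URL while asking for its ActivityStreams JSON representation. The sketch below uses Python with the requests library; the function name and the exact Accept value are illustrative assumptions, not this tool's actual code.

# A minimal sketch (not this tool's implementation) of fetching an
# ActivityPub object by asking the server for its JSON representation.
import requests

def fetch_activitypub_object(url: str) -> dict:
    # Asking for the ActivityStreams JSON-LD profile; many servers also
    # accept the shorter "application/activity+json" value.
    headers = {
        "Accept": 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'
    }
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    note = fetch_activitypub_object(
        "https://toot.io/users/synlogic/statuses/111245214244381100"
    )
    print(note["type"], note["attributedTo"])

Pointing a request like this at the status URL used here returns the Note object shown below.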
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      "ostatus": "http://ostatus.org#",
      "atomUri": "ostatus:atomUri",
      "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
      "conversation": "ostatus:conversation",
      "sensitive": "as:sensitive",
      "toot": "http://joinmastodon.org/ns#",
      "votersCount": "toot:votersCount"
    }
  ],
  "id": "https://toot.io/users/synlogic/statuses/111245214244381100",
  "type": "Note",
  "summary": null,
  "inReplyTo": null,
  "published": "2023-10-16T14:40:58Z",
  "url": "https://toot.io/@synlogic/111245214244381100",
  "attributedTo": "https://toot.io/users/synlogic",
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "cc": [
    "https://toot.io/users/synlogic/followers"
  ],
  "sensitive": false,
  "atomUri": "https://toot.io/users/synlogic/statuses/111245214244381100",
  "inReplyToAtomUri": null,
  "conversation": "tag:toot.io,2023-10-16:objectId=37100327:objectType=Conversation",
  "content": "<p>latlearn</p><p>my Go-based FOSS lib for latency instrum & reporting</p><p>unlike some other ways to measure & learn latency statistics, latlearn is intended, by design, to be integrated into your code & remain there, enabled, *all* the time</p><p>why?</p><p>1. its overhead is tiny. around 74 ns per span, in many cases -- in 99.9% thats trivial</p><p>2. gives way to *parameterize* reported latencies -- to study O()-style complexity</p><p>3. allows autonomous latency-based dynamic adjustments to logic</p><p><a href=\"https://github.com/mkramlich/latlearn\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">github.com/mkramlich/latlearn</span><span class=\"invisible\"></span></a></p>",
  "contentMap": {
    "en": "<p>latlearn</p><p>my Go-based FOSS lib for latency instrum & reporting</p><p>unlike some other ways to measure & learn latency statistics, latlearn is intended, by design, to be integrated into your code & remain there, enabled, *all* the time</p><p>why?</p><p>1. its overhead is tiny. around 74 ns per span, in many cases -- in 99.9% thats trivial</p><p>2. gives way to *parameterize* reported latencies -- to study O()-style complexity</p><p>3. allows autonomous latency-based dynamic adjustments to logic</p><p><a href=\"https://github.com/mkramlich/latlearn\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">github.com/mkramlich/latlearn</span><span class=\"invisible\"></span></a></p>"
  },
  "updated": "2023-10-16T14:42:52Z",
  "attachment": [],
  "tag": [],
  "replies": {
    "id": "https://toot.io/users/synlogic/statuses/111245214244381100/replies",
    "type": "Collection",
    "first": {
      "type": "CollectionPage",
      "next": "https://toot.io/users/synlogic/statuses/111245214244381100/replies?min_id=111245272901087738&page=true",
      "partOf": "https://toot.io/users/synlogic/statuses/111245214244381100/replies",
      "items": [
        "https://toot.io/users/synlogic/statuses/111245272901087738"
      ]
    }
  },
  "likes": {
    "id": "https://toot.io/users/synlogic/statuses/111245214244381100/likes",
    "type": "Collection",
    "totalItems": 0
  },
  "shares": {
    "id": "https://toot.io/users/synlogic/statuses/111245214244381100/shares",
    "type": "Collection",
    "totalItems": 1
  }
}