ActivityPub Viewer

A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send the server a request with the right Accept header (application/activity+json) to retrieve the underlying object.
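What the tool does under the hood can be sketched in a few lines of Python: request the status URL with an ActivityStreams Accept header, so the server returns the JSON object instead of the HTML page. This is a minimal sketch using only the standard library; the exact Accept value servers honor can vary, and real Mastodon servers may require an HTTP Signature for some objects.

```python
import json
import urllib.request

# Ask for the ActivityPub representation rather than HTML.
# Mastodon and most Fediverse servers honor either of these media types.
ACCEPT = (
    'application/activity+json, '
    'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'
)

def fetch_activitypub_object(url: str) -> dict:
    """Fetch the underlying ActivityPub object behind a status or actor URL."""
    req = urllib.request.Request(url, headers={"Accept": ACCEPT})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (performs a live network request):
# obj = fetch_activitypub_object(
#     "https://piaille.fr/users/gmic/statuses/113442117070439301"
# )
# obj["type"] would then be "Note" for the object shown below.
```

With a browser's default Accept header the same URL returns the HTML page, which is why a plain "open in browser" view shows the rendered post rather than this JSON.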

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      "ostatus": "http://ostatus.org#",
      "atomUri": "ostatus:atomUri",
      "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
      "conversation": "ostatus:conversation",
      "sensitive": "as:sensitive",
      "toot": "http://joinmastodon.org/ns#",
      "votersCount": "toot:votersCount"
    }
  ],
  "id": "https://piaille.fr/users/gmic/statuses/113442117070439301",
  "type": "Note",
  "summary": null,
  "inReplyTo": "https://framapiaf.org/users/davidrevoy/statuses/113442041438733895",
  "published": "2024-11-07T14:22:15Z",
  "url": "https://piaille.fr/@gmic/113442117070439301",
  "attributedTo": "https://piaille.fr/users/gmic",
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "cc": [
    "https://piaille.fr/users/gmic/followers",
    "https://framapiaf.org/users/davidrevoy"
  ],
  "sensitive": false,
  "atomUri": "https://piaille.fr/users/gmic/statuses/113442117070439301",
  "inReplyToAtomUri": "https://framapiaf.org/users/davidrevoy/statuses/113442041438733895",
  "conversation": "tag:piaille.fr,2024-11-07:objectId=91830207:objectType=Conversation",
  "content": "<p><span class=\"h-card\" translate=\"no\"><a href=\"https://framapiaf.org/@davidrevoy\" class=\"u-url mention\">@<span>davidrevoy</span></a></span> That&#39;s fine ! I&#39;ll write you ASAP.<br />Yes, the downscale is simulated during the training. For the images, the more is probably the best :) In the training data set I used here (DIV2K), there are 800 images, but I guess a hundred would be already enough (yes, that&#39;s already a lot...).<br />We can try with less anyway, and see what happens.<br />We could also try not only with lineart, but also flat-colorized images, to have a bigger dataset.</p>",
  "contentMap": {
    "fr": "<p><span class=\"h-card\" translate=\"no\"><a href=\"https://framapiaf.org/@davidrevoy\" class=\"u-url mention\">@<span>davidrevoy</span></a></span> That&#39;s fine ! I&#39;ll write you ASAP.<br />Yes, the downscale is simulated during the training. For the images, the more is probably the best :) In the training data set I used here (DIV2K), there are 800 images, but I guess a hundred would be already enough (yes, that&#39;s already a lot...).<br />We can try with less anyway, and see what happens.<br />We could also try not only with lineart, but also flat-colorized images, to have a bigger dataset.</p>"
  },
  "attachment": [],
  "tag": [
    {
      "type": "Mention",
      "href": "https://framapiaf.org/users/davidrevoy",
      "name": "@davidrevoy@framapiaf.org"
    }
  ],
  "replies": {
    "id": "https://piaille.fr/users/gmic/statuses/113442117070439301/replies",
    "type": "Collection",
    "first": {
      "type": "CollectionPage",
      "next": "https://piaille.fr/users/gmic/statuses/113442117070439301/replies?only_other_accounts=true&page=true",
      "partOf": "https://piaille.fr/users/gmic/statuses/113442117070439301/replies",
      "items": []
    }
  },
  "likes": {
    "id": "https://piaille.fr/users/gmic/statuses/113442117070439301/likes",
    "type": "Collection",
    "totalItems": 2
  },
  "shares": {
    "id": "https://piaille.fr/users/gmic/statuses/113442117070439301/shares",
    "type": "Collection",
    "totalItems": 0
  }
}
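Once parsed, an object like the one above can be consumed with plain dictionary access. A minimal sketch that pulls out a few common fields (the field names and values come from the JSON above; summarize_note is a hypothetical helper, not part of any library):

```python
def summarize_note(note: dict) -> dict:
    """Extract a few common fields from a parsed ActivityPub Note object."""
    return {
        "id": note["id"],                      # canonical object URL
        "author": note["attributedTo"],        # actor who wrote the note
        "in_reply_to": note.get("inReplyTo"),  # None for top-level posts
        # Mentions live in the "tag" array alongside hashtags, keyed by type.
        "mentions": [
            t["name"] for t in note.get("tag", []) if t.get("type") == "Mention"
        ],
        # Mastodon inlines like/share counts as Collections with totalItems.
        "likes": note.get("likes", {}).get("totalItems", 0),
        "shares": note.get("shares", {}).get("totalItems", 0),
    }
```

For the Note above this would report one mention (@davidrevoy@framapiaf.org), two likes, and zero shares. Note that "content" is HTML, not plain text, so a real consumer should sanitize or strip it before display.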