A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send a request with the right Accept header to the server so you can view the underlying object.
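Under the hood this is just an HTTP GET whose Accept header asks for the ActivityPub representation instead of the HTML page. A minimal sketch in Python using only the standard library (the helper names here are this example's own, not part of any library):

```python
import json
import urllib.request

# The Activity Streams 2.0 media type; servers also commonly accept the
# shorter "application/activity+json".
AP_ACCEPT = 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'


def build_ap_request(url: str) -> urllib.request.Request:
    """Build a GET request whose Accept header asks for ActivityPub JSON."""
    return urllib.request.Request(url, headers={"Accept": AP_ACCEPT})


def fetch_ap_object(url: str) -> dict:
    """Fetch and decode an ActivityPub object (requires network access)."""
    with urllib.request.urlopen(build_ap_request(url)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Fetching the status URL below with this header returns the Note shown here.
    note = fetch_ap_object(
        "https://mathstodon.xyz/users/jef/statuses/114427045535812839"
    )
    print(note["type"])
```

Without that Accept header, most servers respond with the ordinary HTML page for the post rather than the JSON object.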
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      "ostatus": "http://ostatus.org#",
      "atomUri": "ostatus:atomUri",
      "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
      "conversation": "ostatus:conversation",
      "sensitive": "as:sensitive",
      "toot": "http://joinmastodon.org/ns#",
      "votersCount": "toot:votersCount",
      "Hashtag": "as:Hashtag"
    }
  ],
  "id": "https://mathstodon.xyz/users/jef/statuses/114427045535812839",
  "type": "Note",
  "summary": null,
  "inReplyTo": null,
  "published": "2025-04-30T13:02:30Z",
  "url": "https://mathstodon.xyz/@jef/114427045535812839",
  "attributedTo": "https://mathstodon.xyz/users/jef",
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "cc": [
    "https://mathstodon.xyz/users/jef/followers"
  ],
  "sensitive": false,
  "atomUri": "https://mathstodon.xyz/users/jef/statuses/114427045535812839",
  "inReplyToAtomUri": null,
  "conversation": "tag:mathstodon.xyz,2025-04-30:objectId=150921045:objectType=Conversation",
  "content": "<p>I had an eye-opening experience working with Gemini yesterday on my book.</p><p>I experienced Gemini being able to manage a complex, novel chain of thought, _until_ it had to switch context to the conventional view which due to its established popularity, had overwhelmingly more _breadth_ of context in the LLM's training data. At that point its attention mechanism would lose focus on the novel concept and be dominated by the many available associations of the overwhelmingly popular thinking. Repeatedly. When I pointed this out to it, it confirmed my assessment and apologized, saying yes, that is currently its nature as a large language model.</p><p>I was testing my chain of thought on perspectival realism (epistemological, not ontological) and functionalism as a more coherent and extensible foundation for metaethics, I was doing this by engaging Gemini in argument against the classic definition of knowledge as \"justified, true, belief\" (JTB) and focusing on the weakness of its self-referential use of \"True\" knowledge to define true knowledge,,,. We went around and around, and as I explained the perspectival/functional viewpoint it was able to explain it back to me and even compose a comprehensive and compelling argument for its coherence, extensibility, and application to a reality that we always only know not for what it _is_, but for what we perceive it _does_ at the expanding boundary of our environment of interaction. But after establishing that it \"understood\" the new concept, when I then asked the AI to look for weakness in my thinking in contrast with JTB, it would effectively forgot the levels of reasoning providing support for the novel thinking,</p><p><a href=\"https://mathstodon.xyz/tags/ai\" class=\"mention hashtag\" rel=\"tag\">#<span>ai</span></a> <a href=\"https://mathstodon.xyz/tags/llm\" class=\"mention hashtag\" rel=\"tag\">#<span>llm</span></a> <a href=\"https://mathstodon.xyz/tags/epistemology\" class=\"mention hashtag\" rel=\"tag\">#<span>epistemology</span></a> <a href=\"https://mathstodon.xyz/tags/context\" class=\"mention hashtag\" rel=\"tag\">#<span>context</span></a> <a href=\"https://mathstodon.xyz/tags/attention\" class=\"mention hashtag\" rel=\"tag\">#<span>attention</span></a> <a href=\"https://mathstodon.xyz/tags/AoM\" class=\"mention hashtag\" rel=\"tag\">#<span>AoM</span></a></p>",
  "contentMap": {
    "en": "<p>I had an eye-opening experience working with Gemini yesterday on my book.</p><p>I experienced Gemini being able to manage a complex, novel chain of thought, _until_ it had to switch context to the conventional view which due to its established popularity, had overwhelmingly more _breadth_ of context in the LLM's training data. At that point its attention mechanism would lose focus on the novel concept and be dominated by the many available associations of the overwhelmingly popular thinking. Repeatedly. When I pointed this out to it, it confirmed my assessment and apologized, saying yes, that is currently its nature as a large language model.</p><p>I was testing my chain of thought on perspectival realism (epistemological, not ontological) and functionalism as a more coherent and extensible foundation for metaethics, I was doing this by engaging Gemini in argument against the classic definition of knowledge as \"justified, true, belief\" (JTB) and focusing on the weakness of its self-referential use of \"True\" knowledge to define true knowledge,,,. We went around and around, and as I explained the perspectival/functional viewpoint it was able to explain it back to me and even compose a comprehensive and compelling argument for its coherence, extensibility, and application to a reality that we always only know not for what it _is_, but for what we perceive it _does_ at the expanding boundary of our environment of interaction. But after establishing that it \"understood\" the new concept, when I then asked the AI to look for weakness in my thinking in contrast with JTB, it would effectively forgot the levels of reasoning providing support for the novel thinking,</p><p><a href=\"https://mathstodon.xyz/tags/ai\" class=\"mention hashtag\" rel=\"tag\">#<span>ai</span></a> <a href=\"https://mathstodon.xyz/tags/llm\" class=\"mention hashtag\" rel=\"tag\">#<span>llm</span></a> <a href=\"https://mathstodon.xyz/tags/epistemology\" class=\"mention hashtag\" rel=\"tag\">#<span>epistemology</span></a> <a href=\"https://mathstodon.xyz/tags/context\" class=\"mention hashtag\" rel=\"tag\">#<span>context</span></a> <a href=\"https://mathstodon.xyz/tags/attention\" class=\"mention hashtag\" rel=\"tag\">#<span>attention</span></a> <a href=\"https://mathstodon.xyz/tags/AoM\" class=\"mention hashtag\" rel=\"tag\">#<span>AoM</span></a></p>"
  },
  "attachment": [],
  "tag": [
    {
      "type": "Hashtag",
      "href": "https://mathstodon.xyz/tags/AOM",
      "name": "#AOM"
    },
    {
      "type": "Hashtag",
      "href": "https://mathstodon.xyz/tags/attention",
      "name": "#attention"
    },
    {
      "type": "Hashtag",
      "href": "https://mathstodon.xyz/tags/context",
      "name": "#context"
    },
    {
      "type": "Hashtag",
      "href": "https://mathstodon.xyz/tags/epistemology",
      "name": "#epistemology"
    },
    {
      "type": "Hashtag",
      "href": "https://mathstodon.xyz/tags/llm",
      "name": "#llm"
    },
    {
      "type": "Hashtag",
      "href": "https://mathstodon.xyz/tags/ai",
      "name": "#ai"
    }
  ],
  "replies": {
    "id": "https://mathstodon.xyz/users/jef/statuses/114427045535812839/replies",
    "type": "Collection",
    "first": {
      "type": "CollectionPage",
      "next": "https://mathstodon.xyz/users/jef/statuses/114427045535812839/replies?only_other_accounts=true&page=true",
      "partOf": "https://mathstodon.xyz/users/jef/statuses/114427045535812839/replies",
      "items": []
    }
  },
  "likes": {
    "id": "https://mathstodon.xyz/users/jef/statuses/114427045535812839/likes",
    "type": "Collection",
    "totalItems": 1
  },
  "shares": {
    "id": "https://mathstodon.xyz/users/jef/statuses/114427045535812839/shares",
    "type": "Collection",
    "totalItems": 0
  }
}
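Once you have the object, the interesting fields are plain JSON: `content` holds the HTML body, `tag` lists hashtags (and, in other posts, mentions), and `likes`/`shares` are Collections whose `totalItems` carry the counts. A small sketch of pulling those out of a Note like the one above — the `summarize_note` helper is this example's own, not part of any library:

```python
def summarize_note(note: dict) -> dict:
    """Extract the human-relevant fields from an ActivityPub Note dict."""
    return {
        "id": note.get("id"),
        "published": note.get("published"),
        "author": note.get("attributedTo"),
        # Hashtags live in the "tag" array as objects with type "Hashtag";
        # mentions appear in the same array with type "Mention".
        "hashtags": [
            t["name"] for t in note.get("tag", []) if t.get("type") == "Hashtag"
        ],
        # Like/share counts are Collections with a "totalItems" field.
        "likes": note.get("likes", {}).get("totalItems", 0),
        "shares": note.get("shares", {}).get("totalItems", 0),
    }


# Usage with a pared-down Note:
sample = {
    "id": "https://mathstodon.xyz/users/jef/statuses/114427045535812839",
    "published": "2025-04-30T13:02:30Z",
    "attributedTo": "https://mathstodon.xyz/users/jef",
    "tag": [{"type": "Hashtag", "name": "#ai"}],
    "likes": {"type": "Collection", "totalItems": 1},
    "shares": {"type": "Collection", "totalItems": 0},
}
print(summarize_note(sample)["hashtags"])  # ['#ai']
```

Note that `replies` is paged (a `Collection` with a `first` `CollectionPage`), so counting replies generally means following the `next` links rather than reading a single field.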