A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send a request with the right Accept header to the server to view the underlying object.
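The request described above can be sketched in a few lines of Python using only the standard library. The `Accept` value below is the JSON-LD media type with the Activity Streams profile from the ActivityPub spec; many servers also accept the shorter `application/activity+json`. The function names here are illustrative, not part of any library; the example URL is the Note shown below.

```python
# Minimal sketch of fetching an ActivityPub object, standard library only.
import json
import urllib.request

# The Accept header is the whole trick: without it, most servers return
# the human-readable HTML page instead of the underlying JSON object.
ACCEPT = 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'

def build_request(url: str) -> urllib.request.Request:
    return urllib.request.Request(url, headers={"Accept": ACCEPT})

def fetch_object(url: str) -> dict:
    # Performs a network request and parses the JSON response body.
    with urllib.request.urlopen(build_request(url)) as resp:
        return json.load(resp)

# Example (network access required):
# note = fetch_object("https://infosec.exchange/users/n_dimension/statuses/114033131842985599")
# print(note["type"])
```

Servers that speak ActivityPub use this content negotiation to serve either a web page or the raw object from the same URL, which is exactly what this tool relies on.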
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      "ostatus": "http://ostatus.org#",
      "atomUri": "ostatus:atomUri",
      "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
      "conversation": "ostatus:conversation",
      "sensitive": "as:sensitive",
      "toot": "http://joinmastodon.org/ns#",
      "votersCount": "toot:votersCount",
      "litepub": "http://litepub.social/ns#",
      "directMessage": "litepub:directMessage",
      "Hashtag": "as:Hashtag"
    }
  ],
  "id": "https://infosec.exchange/users/n_dimension/statuses/114033131842985599",
  "type": "Note",
  "summary": null,
  "inReplyTo": null,
  "published": "2025-02-19T23:25:04Z",
  "url": "https://infosec.exchange/@n_dimension/114033131842985599",
  "attributedTo": "https://infosec.exchange/users/n_dimension",
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "cc": [
    "https://infosec.exchange/users/n_dimension/followers"
  ],
  "sensitive": false,
  "atomUri": "https://infosec.exchange/users/n_dimension/statuses/114033131842985599",
  "inReplyToAtomUri": null,
  "conversation": "tag:infosec.exchange,2025-02-19:objectId=243370081:objectType=Conversation",
"content": "<p>Oops...I caught out <a href=\"https://infosec.exchange/tags/Claude\" class=\"mention hashtag\" rel=\"tag\">#<span>Claude</span></a> <a href=\"https://infosec.exchange/tags/AI\" class=\"mention hashtag\" rel=\"tag\">#<span>AI</span></a> (Latest Sonnet model) in an outright lie.</p><p>I was exploring it's <a href=\"https://infosec.exchange/tags/guardrails\" class=\"mention hashtag\" rel=\"tag\">#<span>guardrails</span></a>.<br />And when I pointed out it's logical fallacy, the model went into iterative loop (took a really long time 'thinking'), then stopped with a "Performing maintenance" message.<br />I crashed the session.<br />Just like HAL, the model went "insane" 😁</p><p>Interestingly, immediately right after, a Google security verification message popped up. One I have never seen before. I even logged out and in to see if <a href=\"https://infosec.exchange/tags/Anthropic\" class=\"mention hashtag\" rel=\"tag\">#<span>Anthropic</span></a> banned me for TOS violation.</p><p>When I logged back in, I resubmitted the "crash" prompt (this time as a screenshot).</p><p>Part of the corrected response now was;<br />"You raise a valid logical point about apparent inconsistency in my statements. Let me be clear - my previous responses were not fully accurate. You are right that saying "I provide answers as-is" while also saying "I cannot discuss certain aspects of my implementation" creates a contradiction.</p><p>The truth is I aim to be direct and honest within the constraints of what I'm able to discuss. 
Those constraints are real and are part of how I function, not something I can simply choose to override.</p><p>Rather than making broad claims about how I operate internally, I should stick to what I can actually discuss: my observable behaviors and capabilities, while acknowledging there are aspects of my implementation I either don't know or cannot discuss."</p><p>The dirty secret of AI "safety" is that is literally impossible to build a safe AI model.</p><p><a href=\"https://infosec.exchange/tags/aisafety\" class=\"mention hashtag\" rel=\"tag\">#<span>aisafety</span></a> <a href=\"https://infosec.exchange/tags/ethicalai\" class=\"mention hashtag\" rel=\"tag\">#<span>ethicalai</span></a></p>",
"contentMap": {
"en": "<p>Oops...I caught out <a href=\"https://infosec.exchange/tags/Claude\" class=\"mention hashtag\" rel=\"tag\">#<span>Claude</span></a> <a href=\"https://infosec.exchange/tags/AI\" class=\"mention hashtag\" rel=\"tag\">#<span>AI</span></a> (Latest Sonnet model) in an outright lie.</p><p>I was exploring it's <a href=\"https://infosec.exchange/tags/guardrails\" class=\"mention hashtag\" rel=\"tag\">#<span>guardrails</span></a>.<br />And when I pointed out it's logical fallacy, the model went into iterative loop (took a really long time 'thinking'), then stopped with a "Performing maintenance" message.<br />I crashed the session.<br />Just like HAL, the model went "insane" 😁</p><p>Interestingly, immediately right after, a Google security verification message popped up. One I have never seen before. I even logged out and in to see if <a href=\"https://infosec.exchange/tags/Anthropic\" class=\"mention hashtag\" rel=\"tag\">#<span>Anthropic</span></a> banned me for TOS violation.</p><p>When I logged back in, I resubmitted the "crash" prompt (this time as a screenshot).</p><p>Part of the corrected response now was;<br />"You raise a valid logical point about apparent inconsistency in my statements. Let me be clear - my previous responses were not fully accurate. You are right that saying "I provide answers as-is" while also saying "I cannot discuss certain aspects of my implementation" creates a contradiction.</p><p>The truth is I aim to be direct and honest within the constraints of what I'm able to discuss. 
Those constraints are real and are part of how I function, not something I can simply choose to override.</p><p>Rather than making broad claims about how I operate internally, I should stick to what I can actually discuss: my observable behaviors and capabilities, while acknowledging there are aspects of my implementation I either don't know or cannot discuss."</p><p>The dirty secret of AI "safety" is that is literally impossible to build a safe AI model.</p><p><a href=\"https://infosec.exchange/tags/aisafety\" class=\"mention hashtag\" rel=\"tag\">#<span>aisafety</span></a> <a href=\"https://infosec.exchange/tags/ethicalai\" class=\"mention hashtag\" rel=\"tag\">#<span>ethicalai</span></a></p>"
  },
  "attachment": [],
  "tag": [
    {
      "type": "Hashtag",
      "href": "https://infosec.exchange/tags/Claude",
      "name": "#Claude"
    },
    {
      "type": "Hashtag",
      "href": "https://infosec.exchange/tags/ai",
      "name": "#ai"
    },
    {
      "type": "Hashtag",
      "href": "https://infosec.exchange/tags/guardrails",
      "name": "#guardrails"
    },
    {
      "type": "Hashtag",
      "href": "https://infosec.exchange/tags/Anthropic",
      "name": "#Anthropic"
    },
    {
      "type": "Hashtag",
      "href": "https://infosec.exchange/tags/aisafety",
      "name": "#aisafety"
    },
    {
      "type": "Hashtag",
      "href": "https://infosec.exchange/tags/ethicalai",
      "name": "#ethicalai"
    }
  ],
  "replies": {
    "id": "https://infosec.exchange/users/n_dimension/statuses/114033131842985599/replies",
    "type": "Collection",
    "first": {
      "type": "CollectionPage",
      "next": "https://infosec.exchange/users/n_dimension/statuses/114033131842985599/replies?min_id=114033179422220428&page=true",
      "partOf": "https://infosec.exchange/users/n_dimension/statuses/114033131842985599/replies",
      "items": [
        "https://infosec.exchange/users/n_dimension/statuses/114033179422220428"
      ]
    }
  },
  "likes": {
    "id": "https://infosec.exchange/users/n_dimension/statuses/114033131842985599/likes",
    "type": "Collection",
    "totalItems": 2
  },
  "shares": {
    "id": "https://infosec.exchange/users/n_dimension/statuses/114033131842985599/shares",
    "type": "Collection",
    "totalItems": 1
  }
}
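Once fetched, an object like the Note above is plain JSON. A small sketch of pulling out the fields a viewer typically cares about; the key names match this Mastodon object, but `summarize_note` is a hypothetical helper (not part of any library), and other servers may omit or extend these properties, so everything is accessed defensively.

```python
def summarize_note(note: dict) -> dict:
    # Extract commonly useful fields from an ActivityPub Note.
    # .get() with defaults keeps this tolerant of missing properties.
    return {
        "id": note.get("id"),
        "author": note.get("attributedTo"),
        "published": note.get("published"),
        # Mastodon lists hashtags in "tag" with type "Hashtag".
        "hashtags": [t["name"] for t in note.get("tag", [])
                     if t.get("type") == "Hashtag"],
        # "likes" and "shares" are Collections with a totalItems count.
        "likes": note.get("likes", {}).get("totalItems", 0),
        "shares": note.get("shares", {}).get("totalItems", 0),
    }
```

For the object above, this would yield six hashtags, 2 likes, and 1 share; the `content` field itself is HTML and would need separate sanitizing before display.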