ActivityPub Viewer

A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send the server a request with the right Accept header (application/activity+json) to fetch the underlying object.
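Under the hood, that lookup is just an HTTP GET with an ActivityPub media type in the Accept header. A minimal sketch in Python (the URL is the example status shown on this page; the helper name is ours, not part of any library):

```python
import urllib.request

def build_activitypub_request(url: str) -> urllib.request.Request:
    """Build a request that asks the server for the ActivityPub JSON
    representation of a resource instead of its HTML page."""
    return urllib.request.Request(
        url,
        headers={"Accept": "application/activity+json"},
    )

req = build_activitypub_request(
    "https://micro.arda.pw/users/arda/statuses/114404594896288840"
)
# urllib.request.urlopen(req) would then return the JSON object below
# (network access required, so the fetch itself is not performed here).
print(req.get_header("Accept"))
```

Servers that speak ActivityPub use this header to decide between returning HTML for browsers and JSON for clients; without it you'd get the regular web page back.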

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      "ostatus": "http://ostatus.org#",
      "atomUri": "ostatus:atomUri",
      "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
      "conversation": "ostatus:conversation",
      "sensitive": "as:sensitive",
      "toot": "http://joinmastodon.org/ns#",
      "votersCount": "toot:votersCount",
      "Hashtag": "as:Hashtag"
    }
  ],
  "id": "https://micro.arda.pw/users/arda/statuses/114404594896288840",
  "type": "Note",
  "summary": null,
  "inReplyTo": null,
  "published": "2025-04-26T13:53:01Z",
  "url": "https://micro.arda.pw/@arda/114404594896288840",
  "attributedTo": "https://micro.arda.pw/users/arda",
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "cc": [
    "https://micro.arda.pw/users/arda/followers"
  ],
  "sensitive": false,
  "atomUri": "https://micro.arda.pw/users/arda/statuses/114404594896288840",
  "inReplyToAtomUri": null,
  "conversation": "tag:micro.arda.pw,2025-04-26:objectId=1554073:objectType=Conversation",
  "content": "<p>I tested a few LLMs locally, asking code and general questions on an M4 Mac Mini. I tried 14B or the closest parameter-sized models since I have 24GB RAM:</p><p>Deepseek R1-14B ended up being my favorite model overall.</p><p>Besides that, I didn’t have many alternatives with tool support anyway — mostly Llama 3.1-8B and Qwen Coder 2.5-14B. Both are decent, but Llama 3.1 feels slightly better to me at the moment.</p><p>If you have any suggestions, I&#39;d love to hear them!</p><p><a href=\"https://micro.arda.pw/tags/deepseek\" class=\"mention hashtag\" rel=\"tag\">#<span>deepseek</span></a> <a href=\"https://micro.arda.pw/tags/llm\" class=\"mention hashtag\" rel=\"tag\">#<span>llm</span></a> <a href=\"https://micro.arda.pw/tags/qwen\" class=\"mention hashtag\" rel=\"tag\">#<span>qwen</span></a> <a href=\"https://micro.arda.pw/tags/llama\" class=\"mention hashtag\" rel=\"tag\">#<span>llama</span></a></p>",
  "contentMap": {
    "en": "<p>I tested a few LLMs locally, asking code and general questions on an M4 Mac Mini. I tried 14B or the closest parameter-sized models since I have 24GB RAM:</p><p>Deepseek R1-14B ended up being my favorite model overall.</p><p>Besides that, I didn’t have many alternatives with tool support anyway — mostly Llama 3.1-8B and Qwen Coder 2.5-14B. Both are decent, but Llama 3.1 feels slightly better to me at the moment.</p><p>If you have any suggestions, I&#39;d love to hear them!</p><p><a href=\"https://micro.arda.pw/tags/deepseek\" class=\"mention hashtag\" rel=\"tag\">#<span>deepseek</span></a> <a href=\"https://micro.arda.pw/tags/llm\" class=\"mention hashtag\" rel=\"tag\">#<span>llm</span></a> <a href=\"https://micro.arda.pw/tags/qwen\" class=\"mention hashtag\" rel=\"tag\">#<span>qwen</span></a> <a href=\"https://micro.arda.pw/tags/llama\" class=\"mention hashtag\" rel=\"tag\">#<span>llama</span></a></p>"
  },
  "updated": "2025-04-26T14:08:43Z",
  "attachment": [],
  "tag": [
    {
      "type": "Hashtag",
      "href": "https://micro.arda.pw/tags/deepseek",
      "name": "#deepseek"
    },
    {
      "type": "Hashtag",
      "href": "https://micro.arda.pw/tags/llm",
      "name": "#llm"
    },
    {
      "type": "Hashtag",
      "href": "https://micro.arda.pw/tags/qwen",
      "name": "#qwen"
    },
    {
      "type": "Hashtag",
      "href": "https://micro.arda.pw/tags/llama",
      "name": "#llama"
    }
  ],
  "replies": {
    "id": "https://micro.arda.pw/users/arda/statuses/114404594896288840/replies",
    "type": "Collection",
    "first": {
      "type": "CollectionPage",
      "next": "https://micro.arda.pw/users/arda/statuses/114404594896288840/replies?only_other_accounts=true&page=true",
      "partOf": "https://micro.arda.pw/users/arda/statuses/114404594896288840/replies",
      "items": []
    }
  },
  "likes": {
    "id": "https://micro.arda.pw/users/arda/statuses/114404594896288840/likes",
    "type": "Collection",
    "totalItems": 0
  },
  "shares": {
    "id": "https://micro.arda.pw/users/arda/statuses/114404594896288840/shares",
    "type": "Collection",
    "totalItems": 0
  }
}
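Once you have the object, the interesting fields are plain JSON. A small sketch pulling the type, publication time, hashtags, and addressing out of a trimmed-down copy of the Note above (the full object carries the same shapes for these fields):

```python
import json

# Abbreviated version of the Note object shown above.
note = json.loads("""
{
  "type": "Note",
  "published": "2025-04-26T13:53:01Z",
  "to": ["https://www.w3.org/ns/activitystreams#Public"],
  "tag": [
    {"type": "Hashtag", "href": "https://micro.arda.pw/tags/deepseek", "name": "#deepseek"},
    {"type": "Hashtag", "href": "https://micro.arda.pw/tags/llm", "name": "#llm"}
  ]
}
""")

# "tag" can mix Hashtag, Mention, and other entries, so filter by type.
hashtags = [t["name"] for t in note.get("tag", []) if t.get("type") == "Hashtag"]

# A post addressed "to" the special Public collection is publicly visible.
is_public = "https://www.w3.org/ns/activitystreams#Public" in note.get("to", [])

print(note["type"], note["published"], hashtags, is_public)
```

The `.get(..., [])` defaults matter in practice: different servers omit different optional fields, so code that walks real-world objects shouldn't assume every key is present.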