ActivityPub Viewer

A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send the server a request with the right Accept header (application/activity+json) so it returns the underlying object instead of an HTML page.

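A minimal sketch of what the viewer does under the hood, using only Python's standard library (the function names are illustrative; ActivityPub servers generally serve the object representation when the Accept header requests application/activity+json, or application/ld+json with the ActivityStreams profile):

```python
import json
import urllib.request

ACTIVITY_JSON = "application/activity+json"

def build_request(url: str) -> urllib.request.Request:
    # Ask the server for the ActivityPub object rather than the HTML page
    # it would normally serve to a browser.
    return urllib.request.Request(url, headers={"Accept": ACTIVITY_JSON})

def fetch_object(url: str) -> dict:
    # Illustrative fetch helper; requires network access to the instance.
    with urllib.request.urlopen(build_request(url)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, `fetch_object("https://social.alternativebit.fr/objects/cdf10a8e-375a-4e17-adbd-bbb2684fe995")` would return the JSON object shown below.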
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://social.alternativebit.fr/schemas/litepub-0.1.jsonld",
    { "@language": "und" }
  ],
  "actor": "https://social.alternativebit.fr/users/picnoir",
  "attachment": [],
  "attributedTo": "https://social.alternativebit.fr/users/picnoir",
  "cc": [
    "https://social.alternativebit.fr/users/picnoir/followers"
  ],
  "content": "I just discovered <a href=\"https://www.perplexity.ai/\" rel=\"ugc\">https://www.perplexity.ai/</a> . I&#39;m late to the LLM party, I know.<br><br>In very short, this is a sort of hybrid between a pure LLM (like chatgpt) and a search engine. You query it sort of like you&#39;d query a search engine. It spits up a response generated through a LLM and gives you pointers to the original posts that have been used to generate the answer.<br><br>I&#39;ve been using it most of last week, it&#39;s insanely useful for programming. This is a clear improvement over Google search.<br><br>Now the sad part: it&#39;s still SaaS. Under the hood, it uses the &quot;open source&quot; (training data not available, just the weights) Mistral LLM. On top of that, they add a in-house web index and a in-house LLM extension pointing you to the sources.<br><br>Now, imagine just a second if this tech was released through a FOSS license. You could run that locally, index and feed it your browsing history, personal notes, emails. You could then query all this data through natural language and have the LLM point you to what you&#39;re looking for.<br><br>- &quot;could you point me to the blog article I read a couple of years ago in which the author was proposing a 3 phase approach to review PRs&quot;<br>- &quot;could you point me to an email I received where the author was proposing an approach to concurently parse Nix code&quot;<br>- &quot;could you point me to a note I wrote when working at XXX in which I was explaining how to generate HTML forms using Servant&quot;<br><br>That would be soooooo good.<br><br>Now, realistically, this could be kept being SaaS doors and used to consolidate a monopolistic hold on your data by surveillance capitalists.<br><br>Energy consumption is also something that could become an issue. Training a LLMs requires a pretty big amount of power. Granted you don&#39;t need to train them that often. I assume indexing all this data before ingesting it to the LLM could also be power hungry. It&#39;d be really interesting to see the energetic impact of these now hybrid-LLM-search engine techs.",
  "context": "https://social.alternativebit.fr/contexts/fe559bf5-f42b-4dd2-8ebd-63925944c311",
  "conversation": "https://social.alternativebit.fr/contexts/fe559bf5-f42b-4dd2-8ebd-63925944c311",
  "id": "https://social.alternativebit.fr/objects/cdf10a8e-375a-4e17-adbd-bbb2684fe995",
  "published": "2024-01-29T12:23:44.707961Z",
  "repliesCount": 2,
  "sensitive": null,
  "source": {
    "content": "I just discovered https://www.perplexity.ai/ . I'm late to the LLM party, I know.\r\n\r\nIn very short, this is a sort of hybrid between a pure LLM (like chatgpt) and a search engine. You query it sort of like you'd query a search engine. It spits up a response generated through a LLM and gives you pointers to the original posts that have been used to generate the answer.\r\n\r\nI've been using it most of last week, it's insanely useful for programming. This is a clear improvement over Google search.\r\n\r\nNow the sad part: it's still SaaS. Under the hood, it uses the \"open source\" (training data not available, just the weights) Mistral LLM. On top of that, they add a in-house web index and a in-house LLM extension pointing you to the sources.\r\n\r\nNow, imagine just a second if this tech was released through a FOSS license. You could run that locally, index and feed it your browsing history, personal notes, emails. You could then query all this data through natural language and have the LLM point you to what you're looking for.\r\n\r\n- \"could you point me to the blog article I read a couple of years ago in which the author was proposing a 3 phase approach to review PRs\"\r\n- \"could you point me to an email I received where the author was proposing an approach to concurently parse Nix code\"\r\n- \"could you point me to a note I wrote when working at XXX in which I was explaining how to generate HTML forms using Servant\"\r\n\r\nThat would be soooooo good.\r\n\r\nNow, realistically, this could be kept being SaaS doors and used to consolidate a monopolistic hold on your data by surveillance capitalists.\r\n\r\nEnergy consumption is also something that could become an issue. Training a LLMs requires a pretty big amount of power. Granted you don't need to train them that often. I assume indexing all this data before ingesting it to the LLM could also be power hungry. It'd be really interesting to see the energetic impact of these now hybrid-LLM-search engine techs.",
    "mediaType": "text/plain"
  },
  "summary": "",
  "tag": [],
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "type": "Note"
}
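As a quick illustration of the object's structure, here is how one might pull a few key fields out of the Note once it is loaded (a sketch using Python's standard library; the embedded JSON is abridged here to just the fields being accessed):

```python
import json

# Abridged copy of the Note above, keeping only the fields this sketch reads.
raw = """
{
  "id": "https://social.alternativebit.fr/objects/cdf10a8e-375a-4e17-adbd-bbb2684fe995",
  "type": "Note",
  "attributedTo": "https://social.alternativebit.fr/users/picnoir",
  "published": "2024-01-29T12:23:44.707961Z",
  "to": ["https://www.w3.org/ns/activitystreams#Public"]
}
"""

note = json.loads(raw)

# A Note addressed to the special Public collection is visible to everyone.
is_public = "https://www.w3.org/ns/activitystreams#Public" in note.get("to", [])

print(note["type"], note["attributedTo"], note["published"], is_public)
```

Note the two bodies in the full object: `content` carries the rendered HTML, while `source` carries the author's original plain-text input (`mediaType: text/plain`).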