A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send a request with the right Accept header to the server to view the underlying object.
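For reference, retrieving an object like the one shown below comes down to a single HTTP GET with an ActivityPub media type in the Accept header. Here is a minimal sketch in Python using the requests library; the URL is the collection id taken from the response below, and the exact media type this tool sends is an assumption (servers generally accept application/activity+json or the application/ld+json form used here).

import requests

# Ask the server for the ActivityPub (JSON) representation rather than the HTML page.
# Most servers also accept "application/activity+json".
headers = {
    "Accept": 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'
}

# Collection id from the object shown below (example URL, not hard-coded by the tool)
url = "https://neuromatch.social/users/jonny/collections/featured"

response = requests.get(url, headers=headers)
response.raise_for_status()

obj = response.json()
print(obj["type"], obj["totalItems"])  # e.g. "OrderedCollection 2"

The response body is the JSON object rendered below.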
{
"@context": [
"https://www.w3.org/ns/activitystreams",
{
"ostatus": "http://ostatus.org#",
"atomUri": "ostatus:atomUri",
"inReplyToAtomUri": "ostatus:inReplyToAtomUri",
"conversation": "ostatus:conversation",
"sensitive": "as:sensitive",
"toot": "http://joinmastodon.org/ns#",
"votersCount": "toot:votersCount",
"litepub": "http://litepub.social/ns#",
"directMessage": "litepub:directMessage",
"blurhash": "toot:blurhash",
"focalPoint": {
"@container": "@list",
"@id": "toot:focalPoint"
},
"Hashtag": "as:Hashtag"
}
],
"id": "https://neuromatch.social/users/jonny/collections/featured",
"type": "OrderedCollection",
"totalItems": 2,
"orderedItems": [
{
"id": "https://neuromatch.social/users/jonny/statuses/112499381823948185",
"type": "Note",
"summary": null,
"inReplyTo": null,
"published": "2024-05-25T02:32:16Z",
"url": "https://neuromatch.social/@jonny/112499381823948185",
"attributedTo": "https://neuromatch.social/users/jonny",
"to": [
"https://www.w3.org/ns/activitystreams#Public"
],
"cc": [
"https://neuromatch.social/users/jonny/followers",
"https://fosstodon.org/users/pydantic",
"https://fosstodon.org/users/linkml"
],
"sensitive": false,
"atomUri": "https://neuromatch.social/users/jonny/statuses/112499381823948185",
"inReplyToAtomUri": null,
"conversation": "tag:neuromatch.social,2024-05-25:objectId=13406349:objectType=Conversation",
"content": "<p>Here's an ~ official ~ release announcement for <a href=\"https://neuromatch.social/tags/numpydantic\" class=\"mention hashtag\" rel=\"tag\">#<span>numpydantic</span></a></p><p>repo: <a href=\"https://github.com/p2p-ld/numpydantic\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">github.com/p2p-ld/numpydantic</span><span class=\"invisible\"></span></a><br>docs: <a href=\"https://numpydantic.readthedocs.io\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">numpydantic.readthedocs.io</span><span class=\"invisible\"></span></a></p><p>Problems: <span class=\"h-card\" translate=\"no\"><a href=\"https://fosstodon.org/@pydantic\" class=\"u-url mention\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">@<span>pydantic</span></a></span> is great for modeling data!! but at the moment it doesn't support array data out of the box. Often array shape and dtype are as important as whether something is an array at all, but there isn't a good way to specify and validate that with the Python type system. Many data formats and standards couple their implementation very tightly with their schema, making them less flexible, less interoperable, and more difficult to maintain than they could be. The existing tools for parameterized array types like nptyping and jaxtyping tie their annotations to a specific array library, rather than allowing array specifications that can be abstract across implementations.</p><p><code>numpydantic</code> is a super small, few-dep, and well-tested package that provides generic array annotations for pydantic models. Specify an array along with its shape and dtype and then use that model with any array library you'd like! Extending support for new array libraries is just subclassing - no PRs or monkeypatching needed. The type has some magic under the hood that uses pydantic validators to give a uniform array interface to things that don't usually behave like arrays - pass a path to a video file, that's an array. pass a path to an HDF5 file and a nested array within it, that's an array. We take advantage of the rest of pydantic's features too, including generating rich JSON schema and smart array dumping.</p><p>This is a standalone part of my work with <span class=\"h-card\" translate=\"no\"><a href=\"https://fosstodon.org/@linkml\" class=\"u-url mention\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">@<span>linkml</span></a></span> <a href=\"https://linkml.io/linkml/schemas/arrays.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">arrays</a> and rearchitecting neurobio data formats like NWB to be dead simple to use and extend, integrating with the tools you already use and across the experimental process - specify your data in a simple <code>yaml</code> format, and get back high quality data modeling code that is standards-compliant out of the box and can be used with arbitrary backends. One step towards the wild exuberance of FAIR data that is just as comfortable in the scattered scripts of real experimental work as it is in carefully curated archives and high performance computing clusters. Longer term I'm trying to abstract away data store implementations to bring content-addressed p2p data stores right into the python interpreter as simply as if something was born in local memory. 
</p><p>plenty of <a href=\"https://numpydantic.readthedocs.io/en/latest/todo.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">todos</a>, but hope ya like it.</p><p><a href=\"https://neuromatch.social/tags/linkml\" class=\"mention hashtag\" rel=\"tag\">#<span>linkml</span></a> <a href=\"https://neuromatch.social/tags/python\" class=\"mention hashtag\" rel=\"tag\">#<span>python</span></a> <a href=\"https://neuromatch.social/tags/NewWork\" class=\"mention hashtag\" rel=\"tag\">#<span>NewWork</span></a> <a href=\"https://neuromatch.social/tags/pydantic\" class=\"mention hashtag\" rel=\"tag\">#<span>pydantic</span></a> <a href=\"https://neuromatch.social/tags/ScientificSoftware\" class=\"mention hashtag\" rel=\"tag\">#<span>ScientificSoftware</span></a></p>",
"contentMap": {
"en": "<p>Here's an ~ official ~ release announcement for <a href=\"https://neuromatch.social/tags/numpydantic\" class=\"mention hashtag\" rel=\"tag\">#<span>numpydantic</span></a></p><p>repo: <a href=\"https://github.com/p2p-ld/numpydantic\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">github.com/p2p-ld/numpydantic</span><span class=\"invisible\"></span></a><br>docs: <a href=\"https://numpydantic.readthedocs.io\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">numpydantic.readthedocs.io</span><span class=\"invisible\"></span></a></p><p>Problems: <span class=\"h-card\" translate=\"no\"><a href=\"https://fosstodon.org/@pydantic\" class=\"u-url mention\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">@<span>pydantic</span></a></span> is great for modeling data!! but at the moment it doesn't support array data out of the box. Often array shape and dtype are as important as whether something is an array at all, but there isn't a good way to specify and validate that with the Python type system. Many data formats and standards couple their implementation very tightly with their schema, making them less flexible, less interoperable, and more difficult to maintain than they could be. The existing tools for parameterized array types like nptyping and jaxtyping tie their annotations to a specific array library, rather than allowing array specifications that can be abstract across implementations.</p><p><code>numpydantic</code> is a super small, few-dep, and well-tested package that provides generic array annotations for pydantic models. Specify an array along with its shape and dtype and then use that model with any array library you'd like! Extending support for new array libraries is just subclassing - no PRs or monkeypatching needed. The type has some magic under the hood that uses pydantic validators to give a uniform array interface to things that don't usually behave like arrays - pass a path to a video file, that's an array. pass a path to an HDF5 file and a nested array within it, that's an array. We take advantage of the rest of pydantic's features too, including generating rich JSON schema and smart array dumping.</p><p>This is a standalone part of my work with <span class=\"h-card\" translate=\"no\"><a href=\"https://fosstodon.org/@linkml\" class=\"u-url mention\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">@<span>linkml</span></a></span> <a href=\"https://linkml.io/linkml/schemas/arrays.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">arrays</a> and rearchitecting neurobio data formats like NWB to be dead simple to use and extend, integrating with the tools you already use and across the experimental process - specify your data in a simple <code>yaml</code> format, and get back high quality data modeling code that is standards-compliant out of the box and can be used with arbitrary backends. One step towards the wild exuberance of FAIR data that is just as comfortable in the scattered scripts of real experimental work as it is in carefully curated archives and high performance computing clusters. Longer term I'm trying to abstract away data store implementations to bring content-addressed p2p data stores right into the python interpreter as simply as if something was born in local memory. 
</p><p>plenty of <a href=\"https://numpydantic.readthedocs.io/en/latest/todo.html\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">todos</a>, but hope ya like it.</p><p><a href=\"https://neuromatch.social/tags/linkml\" class=\"mention hashtag\" rel=\"tag\">#<span>linkml</span></a> <a href=\"https://neuromatch.social/tags/python\" class=\"mention hashtag\" rel=\"tag\">#<span>python</span></a> <a href=\"https://neuromatch.social/tags/NewWork\" class=\"mention hashtag\" rel=\"tag\">#<span>NewWork</span></a> <a href=\"https://neuromatch.social/tags/pydantic\" class=\"mention hashtag\" rel=\"tag\">#<span>pydantic</span></a> <a href=\"https://neuromatch.social/tags/ScientificSoftware\" class=\"mention hashtag\" rel=\"tag\">#<span>ScientificSoftware</span></a></p>"
},
"attachment": [
{
"type": "Document",
"mediaType": "image/png",
"url": "https://media.neuromatch.social/media_attachments/files/112/499/343/871/274/862/original/7111010d7847d8ef.png",
"name": "[This and the following images aren't very screen reader friendly with a lot of code in them. I'll describe what's going on in brackets and then put the text below.\n\nIn this image: a demonstration of the basic usage of numpydantic, declaring an \"array\" field on a pydantic model with an NDArray class with a shape and dtype specification. The model can then be used with a number of different array libraries and data formats, including validation.]\n\nNumpydantic allows you to do this:\n\nfrom pydantic import BaseModel\nfrom numpydantic import NDArray, Shape\n\nclass MyModel(BaseModel):\n array: NDArray[Shape[\"3 x, 4 y, * z\"], int]\n\nAnd use it with your favorite array library:\n\nimport numpy as np\nimport dask.array as da\nimport zarr\n\n# numpy\nmodel = MyModel(array=np.zeros((3, 4, 5), dtype=int))\n# dask\nmodel = MyModel(array=da.zeros((3, 4, 5), dtype=int))\n# hdf5 datasets\nmodel = MyModel(array=('data.h5', '/nested/dataset'))\n# zarr arrays\nmodel = MyModel(array=zarr.zeros((3,4,5), dtype=int))\nmodel = MyModel(array='data.zarr')\nmodel = MyModel(array=('data.zarr', '/nested/dataset'))\n# video files\nmodel = MyModel(array=\"data.mp4\")\n\n\n\n",
"blurhash": "U14LUY_NNaNFEMX8oJRjxYs;WFRkxujtjFs;",
"focalPoint": [
0,
0
],
"width": 1128,
"height": 1124
},
{
"type": "Document",
"mediaType": "image/png",
"url": "https://media.neuromatch.social/media_attachments/files/112/499/361/475/260/219/original/606340afc7d4bd8c.png",
"name": "[Further demonstration of validation and array expression, where a Union of NDArray specifications can specify a more complex data type - eg. an image that can be any shape in x and y, an RGB image, or a specific resolution of a video, each with independently checked dtypes]\n\nFor example, to specify a very special type of image that can either be\n\n a 2D float array where the axes can be any size, or\n\n a 3D uint8 array where the third axis must be size 3\n\n a 1080p video\n\nfrom typing import Union\nfrom pydantic import BaseModel\nimport numpy as np\n\nfrom numpydantic import NDArray, Shape\n\nclass Image(BaseModel):\n array: Union[\n NDArray[Shape[\"* x, * y\"], float],\n NDArray[Shape[\"* x, * y, 3 rgb\"], np.uint8],\n NDArray[Shape[\"* t, 1080 y, 1920 x, 3 rgb\"], np.uint8]\n ]\n\nAnd then use that as a transparent interface to your favorite array library!\nInterfaces\nNumpy\n\nThe Coca-Cola of array libraries\n\nimport numpy as np\n# works\nframe_gray = Image(array=np.ones((1280, 720), dtype=float))\nframe_rgb = Image(array=np.ones((1280, 720, 3), dtype=np.uint8))\n\n# fails\nwrong_n_dimensions = Image(array=np.ones((1280,), dtype=float))\nwrong_shape = Image(array=np.ones((1280,720,10), dtype=np.uint8))\n\n# shapes and types are checked together, so this also fails\nwrong_shape_dtype_combo = Image(array=np.ones((1280, 720, 3), dtype=float))\n\n",
"blurhash": "U14ef@_MWAW;M_kCkXR*tSWCROV@RQt7t7t6",
"focalPoint": [
0,
0
],
"width": 1408,
"height": 1472
},
{
"type": "Document",
"mediaType": "image/png",
"url": "https://media.neuromatch.social/media_attachments/files/112/499/363/179/526/128/original/283484ca6e6f3ba9.png",
"name": "[Demonstration of usage outside of pydantic as just a normal python type - you can validate an array against a specification by checking it the array is an instance of the array specification type]\n\nAnd use the NDArray type annotation like a regular type outside of pydantic – eg. to validate an array anywhere, use isinstance:\n\narray_type = NDArray[Shape[\"1, 2, 3\"], int]\nisinstance(np.zeros((1,2,3), dtype=int), array_type)\n# True\nisinstance(zarr.zeros((1,2,3), dtype=int), array_type)\n# True\nisinstance(np.zeros((4,5,6), dtype=int), array_type)\n# False\nisinstance(np.zeros((1,2,3), dtype=float), array_type)\n# False\n\n",
"blurhash": "U14xrU.99Ft6?HxZRkj]emaKs;s;%NWVWAt7",
"focalPoint": [
0,
0
],
"width": 1128,
"height": 562
},
{
"type": "Document",
"mediaType": "image/png",
"url": "https://media.neuromatch.social/media_attachments/files/112/499/363/584/367/121/original/176de6d43fde919e.png",
"name": "[Demonstration of JSON schema generation using the sort of odd case of an array with a specific dtype but an arbitrary shape. It has to use a recursive JSON schema definition, where the items of a given JSON array can either be the innermost dtype or another instance of that same array. Since JSON Schema doesn't support extended dtypes like 8-bit integers, we encode that information as maximum and minimum constraints on the `integer` class and add it in the schema metadata. Since pydantic renders all recursive schemas like this in the same $defs block, we use a blake2b hash against the dtype specification to keep them deduplicated.]\n\nnumpydantic can even handle shapes with unbounded numbers of dimensions by using recursive JSON schema!!!\n\nSo the any-shaped array (using nptyping’s ellipsis notation):\n\nclass AnyShape(BaseModel):\n array: NDArray[Shape[\"*, ...\"], np.uint8]\n\nis rendered to JSON-Schema like this:\n\n{\n \"$defs\": {\n \"any-shape-array-9b5d89838a990d79\": {\n \"anyOf\": [\n {\n \"items\": {\n \"$ref\": \"#/$defs/any-shape-array-9b5d89838a990d79\"\n },\n \"type\": \"array\"\n },\n {\"maximum\": 255, \"minimum\": 0, \"type\": \"integer\"}\n ]\n }\n },\n \"properties\": {\n \"array\": {\n \"dtype\": \"numpy.uint8\",\n \"items\": {\"$ref\": \"#/$defs/any-shape-array-9b5d89838a990d79\"},\n \"title\": \"Array\",\n \"type\": \"array\"\n }\n },\n \"required\": [\"array\"],\n \"title\": \"AnyShape\",\n \"type\": \"object\"\n}\n\n",
"blurhash": "U04B,v?[E1M}NfxvRQi__4$lRitQnOt7brSe",
"focalPoint": [
0,
0
],
"width": 1122,
"height": 1698
}
],
"tag": [
{
"type": "Mention",
"href": "https://fosstodon.org/users/pydantic",
"name": "@pydantic@fosstodon.org"
},
{
"type": "Mention",
"href": "https://fosstodon.org/users/linkml",
"name": "@linkml@fosstodon.org"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/numpydantic",
"name": "#numpydantic"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/linkml",
"name": "#linkml"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/python",
"name": "#python"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/newwork",
"name": "#newwork"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/pydantic",
"name": "#pydantic"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/scientificsoftware",
"name": "#scientificsoftware"
}
],
"replies": {
"id": "https://neuromatch.social/users/jonny/statuses/112499381823948185/replies",
"type": "Collection",
"first": {
"type": "CollectionPage",
"next": "https://neuromatch.social/users/jonny/statuses/112499381823948185/replies?min_id=113190012700401370&page=true",
"partOf": "https://neuromatch.social/users/jonny/statuses/112499381823948185/replies",
"items": [
"https://neuromatch.social/users/jonny/statuses/112499442268431441",
"https://neuromatch.social/users/jonny/statuses/113190012700401370"
]
}
},
"likes": {
"id": "https://neuromatch.social/users/jonny/statuses/112499381823948185/likes",
"type": "Collection",
"totalItems": 43
},
"shares": {
"id": "https://neuromatch.social/users/jonny/statuses/112499381823948185/shares",
"type": "Collection",
"totalItems": 25
}
},
{
"id": "https://neuromatch.social/users/jonny/statuses/110346670913611962",
"type": "Note",
"summary": null,
"inReplyTo": null,
"published": "2023-05-10T22:09:35Z",
"url": "https://neuromatch.social/@jonny/110346670913611962",
"attributedTo": "https://neuromatch.social/users/jonny",
"to": [
"https://www.w3.org/ns/activitystreams#Public"
],
"cc": [
"https://neuromatch.social/users/jonny/followers"
],
"sensitive": false,
"atomUri": "https://neuromatch.social/users/jonny/statuses/110346670913611962",
"inReplyToAtomUri": null,
"conversation": "tag:neuromatch.social,2023-05-10:objectId=2856986:objectType=Conversation",
"content": "<p>Glad to formally release my latest work - Surveillance Graphs: Vulgarity and Cloud Orthodoxy in Linked Data Infrastructures. </p><p>web: <a href=\"https://jon-e.net/surveillance-graphs\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">jon-e.net/surveillance-graphs</span><span class=\"invisible\"></span></a><br>hcommons: <a href=\"https://doi.org/10.17613/syv8-cp10\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">doi.org/10.17613/syv8-cp10</span><span class=\"invisible\"></span></a></p><p>A bit of an overview and then I'll get into some of the more specific arguments in a thread:</p><p>This piece is in three parts: </p><p>First I trace the mutation of the liberatory ambitions of the <a href=\"https://neuromatch.social/tags/SemanticWeb\" class=\"mention hashtag\" rel=\"tag\">#<span>SemanticWeb</span></a> into <a href=\"https://neuromatch.social/tags/KnowledgeGraphs\" class=\"mention hashtag\" rel=\"tag\">#<span>KnowledgeGraphs</span></a>, an underappreciated component in the architecture of <a href=\"https://neuromatch.social/tags/SurveillanceCapitalism\" class=\"mention hashtag\" rel=\"tag\">#<span>SurveillanceCapitalism</span></a>. This mutation plays out against the backdrop of the broader platform capture of the web, rendering us as consumer-users of information <em>services</em> rather than empowered people communicating over informational <em>protocols.</em> </p><p>I then show how this platform logic influences two contemporary public information infrastructure projects: the NIH's Biomedical Data Translator and the NSF's Open Knowledge Network. I argue that projects like these, while well intentioned, demonstrate the fundamental limitations of platformatized public infrastructure and create new capacities for harm by their enmeshment in and inevitable capture by information conglomerates. The dream of a seamless \"knowledge graph of everything\" is unlikely to deliver on the utopian promises made by techno-solutionists, but they do create new opportunities for algorithmic oppression -- automated conversion therapy, predictive policing, abuse of bureacracy in \"smart cities,\" etc. Given the framing of corporate knowledge graphs, these projects are poised to create facilitating technologies (that the info conglomerates write about needing themselves) for a new kind of interoperable corporate data infrastructure, where a gradient of public to private information is traded between \"open\" and quasi-proprietary knowledge graphs to power derivative platforms and services. </p><p>When approaching \"AI\" from the perspective of the semantic web and knowledge graphs, it becomes apparent that the new generation of <a href=\"https://neuromatch.social/tags/LLMs\" class=\"mention hashtag\" rel=\"tag\">#<span>LLMs</span></a> are intended to serve as <em>interfaces to knowledge graphs.</em> These \"augmented language models\" are joint systems that combine a language model as a means of interacting with some underlying knowledge graph, integrated in multiple places in the computing ecosystem: eg. mobile apps, assistants, search, and enterprise platforms. 
I concretize and extend prior criticism about the capacity for LLMs to concentrate power by capturing access to information in increasingly isolated platforms and expand surveillance by creating the demand for extended personalized data graphs across multiple systems from home surveillance to your workplace, medical, and governmental data.</p><p>I pose Vulgar Linked Data as an alternative to the infrastructural pattern I call the Cloud Orthodoxy: rather than platforms operated by an informational priesthood, reorienting our public infrastructure efforts to support vernacular expression across heterogeneous <a href=\"https://neuromatch.social/tags/p2p\" class=\"mention hashtag\" rel=\"tag\">#<span>p2p</span></a> mediums. This piece extends a prior work of mine: <a href=\"https://jon-e.net/infrastructure\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Decentralized Infrastructure for (Neuro)science)</a> which has more complete draft of what that might look like. </p><p>(I don't think you can pre-write threads on masto, so i'll post some thoughts as I write them under this) /1</p><p><a href=\"https://neuromatch.social/tags/SurveillanceGraphs\" class=\"mention hashtag\" rel=\"tag\">#<span>SurveillanceGraphs</span></a></p>",
"contentMap": {
"en": "<p>Glad to formally release my latest work - Surveillance Graphs: Vulgarity and Cloud Orthodoxy in Linked Data Infrastructures. </p><p>web: <a href=\"https://jon-e.net/surveillance-graphs\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">jon-e.net/surveillance-graphs</span><span class=\"invisible\"></span></a><br>hcommons: <a href=\"https://doi.org/10.17613/syv8-cp10\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">doi.org/10.17613/syv8-cp10</span><span class=\"invisible\"></span></a></p><p>A bit of an overview and then I'll get into some of the more specific arguments in a thread:</p><p>This piece is in three parts: </p><p>First I trace the mutation of the liberatory ambitions of the <a href=\"https://neuromatch.social/tags/SemanticWeb\" class=\"mention hashtag\" rel=\"tag\">#<span>SemanticWeb</span></a> into <a href=\"https://neuromatch.social/tags/KnowledgeGraphs\" class=\"mention hashtag\" rel=\"tag\">#<span>KnowledgeGraphs</span></a>, an underappreciated component in the architecture of <a href=\"https://neuromatch.social/tags/SurveillanceCapitalism\" class=\"mention hashtag\" rel=\"tag\">#<span>SurveillanceCapitalism</span></a>. This mutation plays out against the backdrop of the broader platform capture of the web, rendering us as consumer-users of information <em>services</em> rather than empowered people communicating over informational <em>protocols.</em> </p><p>I then show how this platform logic influences two contemporary public information infrastructure projects: the NIH's Biomedical Data Translator and the NSF's Open Knowledge Network. I argue that projects like these, while well intentioned, demonstrate the fundamental limitations of platformatized public infrastructure and create new capacities for harm by their enmeshment in and inevitable capture by information conglomerates. The dream of a seamless \"knowledge graph of everything\" is unlikely to deliver on the utopian promises made by techno-solutionists, but they do create new opportunities for algorithmic oppression -- automated conversion therapy, predictive policing, abuse of bureacracy in \"smart cities,\" etc. Given the framing of corporate knowledge graphs, these projects are poised to create facilitating technologies (that the info conglomerates write about needing themselves) for a new kind of interoperable corporate data infrastructure, where a gradient of public to private information is traded between \"open\" and quasi-proprietary knowledge graphs to power derivative platforms and services. </p><p>When approaching \"AI\" from the perspective of the semantic web and knowledge graphs, it becomes apparent that the new generation of <a href=\"https://neuromatch.social/tags/LLMs\" class=\"mention hashtag\" rel=\"tag\">#<span>LLMs</span></a> are intended to serve as <em>interfaces to knowledge graphs.</em> These \"augmented language models\" are joint systems that combine a language model as a means of interacting with some underlying knowledge graph, integrated in multiple places in the computing ecosystem: eg. mobile apps, assistants, search, and enterprise platforms. 
I concretize and extend prior criticism about the capacity for LLMs to concentrate power by capturing access to information in increasingly isolated platforms and expand surveillance by creating the demand for extended personalized data graphs across multiple systems from home surveillance to your workplace, medical, and governmental data.</p><p>I pose Vulgar Linked Data as an alternative to the infrastructural pattern I call the Cloud Orthodoxy: rather than platforms operated by an informational priesthood, reorienting our public infrastructure efforts to support vernacular expression across heterogeneous <a href=\"https://neuromatch.social/tags/p2p\" class=\"mention hashtag\" rel=\"tag\">#<span>p2p</span></a> mediums. This piece extends a prior work of mine: <a href=\"https://jon-e.net/infrastructure\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Decentralized Infrastructure for (Neuro)science)</a> which has more complete draft of what that might look like. </p><p>(I don't think you can pre-write threads on masto, so i'll post some thoughts as I write them under this) /1</p><p><a href=\"https://neuromatch.social/tags/SurveillanceGraphs\" class=\"mention hashtag\" rel=\"tag\">#<span>SurveillanceGraphs</span></a></p>"
},
"updated": "2023-05-10T22:11:15Z",
"attachment": [
{
"type": "Document",
"mediaType": "image/png",
"url": "https://media.neuromatch.social/media_attachments/files/110/346/650/954/847/550/original/7b927efb6df758d5.png",
"name": "Surveillance Graphs\nVulgarity and Cloud Orthodoxy in Linked Data Infrastructures\n\n[an image of an eye against a grid is to the right of a column of text]\n\n Jonny L. Saunders\nUCLA - Department of Neurology, Institute of Pirate Technology\nFirst Published: 2023-05-02 \n\nInformation is power, and that power has been largely enclosed by a handful of information conglomerates. The logic of the surveillance-driven information economy demands systems for handling mass quantities of heterogeneous data, increasingly in the form of knowledge graphs. An archaeology of knowledge graphs and their mutation from the liberatory aspirations of the semantic web gives us an underexplored lens to understand contemporary information systems. I explore how the ideology of cloud systems steers two projects from the NIH and NSF intended to build information infrastructures for the public good to inevitable corporate capture, facilitating the development of a new kind of multilayered public/private surveillance system in the process. I argue that understanding technologies like large language models as interfaces to knowledge graphs is critical to understand their role in a larger project of informational enclosure and concentration of power. I draw from multiple histories of liberatory information technologies to develop Vulgar Linked Data as an alternative to the Cloud Orthodoxy, resisting the colonial urge for universality in favor of vernacular expression in peer to peer systems.",
"blurhash": "U98;V?00~q00j[ofofWB00~q4n-;t7j[WBWB",
"focalPoint": [
0,
0
],
"width": 1797,
"height": 1154
}
],
"tag": [
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/semanticweb",
"name": "#semanticweb"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/knowledgegraphs",
"name": "#knowledgegraphs"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/surveillancecapitalism",
"name": "#surveillancecapitalism"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/llms",
"name": "#llms"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/p2p",
"name": "#p2p"
},
{
"type": "Hashtag",
"href": "https://neuromatch.social/tags/surveillancegraphs",
"name": "#surveillancegraphs"
}
],
"replies": {
"id": "https://neuromatch.social/users/jonny/statuses/110346670913611962/replies",
"type": "Collection",
"first": {
"type": "CollectionPage",
"next": "https://neuromatch.social/users/jonny/statuses/110346670913611962/replies?min_id=110346734995563276&page=true",
"partOf": "https://neuromatch.social/users/jonny/statuses/110346670913611962/replies",
"items": [
"https://neuromatch.social/users/jonny/statuses/110346734995563276"
]
}
},
"likes": {
"id": "https://neuromatch.social/users/jonny/statuses/110346670913611962/likes",
"type": "Collection",
"totalItems": 86
},
"shares": {
"id": "https://neuromatch.social/users/jonny/statuses/110346670913611962/shares",
"type": "Collection",
"totalItems": 70
}
}
]
}