i think honestly what i want out of a storage solution is to like maybe upload stuff to some kind of object storage and then separately have a graph database or rdf quad store for the metadata.
-
@alice yeah this is likely because people just "know" json and how to work with it, but it's not at all efficient for large data stores lol
-
Eugenus Optimus 🇺🇦 replied to infinite love ⴳ
@trwnh I dunno, do you need search over multiple instances of unstructured data organized into a hierarchy or mesh?
-
@tech_himbo the main use case is easier handling of the content as data. for the most part, content is just data meant for human consumption. but what we lack is convenience around managing content as data. think of how a CMS works for example. now say you have a friend who runs a different CMS. what's the easiest way to export/import some content from you to them? how do you "share" content? reuse? and so on.
having the atomic data makes it easier to handle, apply logic to, reason about, wrap up, and so on
-
@tech_himbo so basically it amounts to operating at the same semantic level as your communication peers. if i say "hi" to you, the core of the message is "hi", but in order to get it across i need to wrap it in some document and then wrap that document in an HTTP message or whatever. that's a lot of overhead for literally just two characters!
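the overhead claim is easy to see concretely. a minimal sketch, assuming a Note-style JSON wrapper loosely modeled on ActivityStreams (the exact fields here are illustrative, not normative):

```python
import json

content = "hi"

# wrap the two-character payload in a minimal envelope
envelope = json.dumps({
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "content": content,
})

print(len(content), "payload bytes vs", len(envelope), "envelope bytes")
```

the envelope is an order of magnitude larger than the payload, before HTTP headers are even counted.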
-
@trwnh yea json exists as a cross-language serialization convenience, sure. mostly i mean that we went toward a plain-text list of serialized atomic activities instead of handling collections. (now im thinking of a world where we export and import AP collections as zip archives of xml files along with their signatures)
-
@tech_himbo it compounds beyond that, too. obviously i am not the only person who has ever strung those two characters together in that order. but that string may be used by other people in other ways. in much the same way that bittorrent allows peers to split a file into pieces and reconstruct it piecewise, the string "hi" may be simply a component in some other larger piece of data -- for example, the content "hi" can be paired with metadata showing that it came from:me and to:you
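the reuse idea falls out naturally from content addressing. a minimal sketch, where the `from`/`to` fields are just illustrative metadata:

```python
import hashlib

def address(data: bytes) -> str:
    # content address: the blob's hash names it, wherever it appears
    return "sha256:" + hashlib.sha256(data).hexdigest()

blob = b"hi"

# two different messages reuse the same two-character component;
# the blob itself only needs to be stored once
msg_a = {"content": address(blob), "from": "me", "to": "you"}
msg_b = {"content": address(blob), "from": "you", "to": "me"}
```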
-
@alice AP collections are still a lil weird even conceptually (they have several flaws and bugs and are imo underspecced), but yes this is the general idea. it's all really just a graph merge at the end of the day. fedi doesn't even care about activities, it cares about the content (mostly Note but really Note.content primarily)
the other part of this is can you imagine stuffing an entire HTML Article into a JSON string? is that really a good idea. like really.
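the escaping problem is visible even with a tiny example (a real article would be far larger):

```python
import json

# a small "article" with markup
article = '<article><h1>Hi</h1><p>She said "hi" &amp; left.</p></article>'

wrapped = json.dumps({"content": article})
print(wrapped)
# every inner quote is now backslash-escaped, and the HTML is opaque
# to any tool that doesn't first parse the JSON layer
```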
-
@trwnh doesn’t this double the problem, though? if CMS A uses HTML snippets, and CMS B uses markdown with a header, you just need to translate from HTML to markdown. but if A uses JSON-LD and plaintext, and B uses XML and RTF, then you have to do two translations. although maybe your proposal is to ban everything but plaintext — in which case, why make plaintext the standard over any other format?
-
@trwnh i would have put all that json into a <head> anyway
-
@tech_himbo not so much "make plaintext the standard" as much as it is "make the human meaningful thing the standard"
so i could write plaintext or i could write semantic html or i could write markdown or asciidoc or whatever. doesn't matter. the serialization is less important than the actual information being conveyed
so CMS A and CMS B don't need to agree on jsonld or xml, but they do need to agree on what an "article" is. they can then (separately) describe that "article" with metadata.
-
@tech_himbo i am for the most part using the RDF data model here because it's generic enough to allow describing basically anything in the form of a graph. sharing some metadata is just a graph merge. sharing the content should be as simple as just copying one file (which can be any format, text or binary)
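in the RDF data model a graph is just a set of (subject, predicate, object) triples, so "sharing some metadata" really is set union. a sketch with made-up URIs and terms:

```python
# two peers' metadata about the same article, as triple sets
g_mine = {
    ("urn:article:1", "dc:title", "hello world"),
    ("urn:article:1", "dc:creator", "me"),
}
g_yours = {
    ("urn:article:1", "dc:creator", "me"),   # already known; dedupes
    ("urn:article:1", "dc:language", "en"),
}

# the graph merge: duplicates collapse automatically
merged = g_mine | g_yours
```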
-
@tech_himbo the proposal is basically "store content separately from metadata" and "have metadata link to content instead of inlining it in your canonical data format"
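a minimal sketch of what that looks like on disk, assuming a hypothetical sidecar-descriptor layout (the filenames and descriptor fields are made up for illustration):

```python
import json, pathlib, tempfile

root = pathlib.Path(tempfile.mkdtemp())

# the content is a plain file, in whatever format it happens to be
(root / "post.html").write_text("<p>hi</p>")

# the metadata is a separate descriptor that links to the content
# instead of inlining it as an escaped string
(root / "post.meta.json").write_text(json.dumps({
    "content": "post.html",      # a link, not an embedded payload
    "mediaType": "text/html",
    "attributedTo": "me",
}))

meta = json.loads((root / "post.meta.json").read_text())
body = (root / meta["content"]).read_text()
```

"sharing the content" is then literally copying one file, and the descriptor travels (or merges) separately.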
-
@trwnh so, they need to agree on a serialization format for articles, and we need a mapping from A’s metadata format to B’s metadata format to enable import/export. is that right?
-
@alice i would split the head off into a separate descriptor
but yes this is basically the issue here. you have head and body, the content is ideally the last remaining body after you unwrap all the layers and extract whatever profiles. you shouldn't be required to use any specific format or container just to pass some atomic content around (metadata optional)
-
@tech_himbo no, they need to agree on the *semantics* of the "content". serializations and formats can be anything, and can be negotiated between peers ("i understand a b c", "i understand c d e", "okay let's agree to use c for this session")
this is about the semantic content model basically
in practical terms: say instead of sending you an entire HTML document i just send you a single paragraph element, or perhaps only its inner text
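the negotiation step described above ("i understand a b c", "i understand c d e", "okay let's use c") can be sketched in a few lines; the function name and preference-order semantics are assumptions for illustration:

```python
def negotiate(mine, theirs):
    """Pick the first format in my preference order that the peer
    also understands; None if we share nothing."""
    for fmt in mine:
        if fmt in theirs:
            return fmt
    return None
```

so `negotiate(["a", "b", "c"], ["c", "d", "e"])` settles on `"c"` for the session.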
-
@trwnh ok, so the requirement is that both services must support a common format, even if that format isn’t a broad standard. is that right?
-
@tech_himbo yes, but also the most straightforward solution to content is to literally just pass it along 1:1 without any containers or metadata
the "problem" is essentially that, for something like an HTML document saved to disk as .html, we pre-bundle the content in the middle of a bunch of presentational stuff that is not content. or for a JSON document, we put an escaped string as the value of some key. i'm saying we don't need to always do that
-
... if you're going to export huge amounts of data, really huge - have the decency to write a reader and encoder for the data you're trying to preserve. When someone in the distant future tries to decode your trove, they'll appreciate your foresight.
-
@tuban_muzuru @erincandescent @alice in most cases "always bet on plain text" is good enough for that kind of thing imo. this is more about strategy and architecture of like... managing content. a sort of storage strategy, one that can handle abstract backends
it's probably going to look less like an sql database and a lot more like object storage in the end: the blob being the content (even if it's as simple as a literal string), and the metadata being whatever attribute-value pairs
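a minimal sketch of that end state, assuming a hypothetical `ContentStore` (all names here are made up): content lives in an object store addressed by hash, and metadata is whatever attribute-value pairs you keep beside it.

```python
import hashlib

class ContentStore:
    def __init__(self):
        self._blobs = {}   # hash -> bytes (the content, text or binary)
        self._meta = {}    # hash -> {attribute: value} (the metadata)

    def put(self, data: bytes, **attrs) -> str:
        """Store a blob under its content hash, with optional metadata."""
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        self._meta.setdefault(key, {}).update(attrs)
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]

    def describe(self, key: str) -> dict:
        return self._meta.get(key, {})

store = ContentStore()
key = store.put(b"hi", mediaType="text/plain", attributedTo="me")
```

swapping the dicts for S3-style object storage and a quad store would change the backend, not the shape of the interface.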