i think honestly what i want out of a storage solution is to upload stuff to some kind of object storage, and then separately have a graph database or rdf quad store for the metadata.
-
the area where my thinking seems to differ from most of what i've seen so far is that i don't make a real distinction between text and binary anymore. content is content. so this implies that the object storage is not just for media, but for everything, including text files. but the thing is that content should be *strictly content*. what most people get "wrong" with HTML, for example, is that they require an entire structural wrapper, which wraps not just a body but also a header
-
which is entirely natural, of course -- HTML is most often used to serialize documents that are browsed on the Web. but for the most part, that's all presentational stuff! if you removed it, you would lose aesthetics, but the core of the message is only a certain *part* that can be extracted and handled on its own. the rest is just part of the *view* or *(re)presentation*.
i've talked before about how you can combine body content with header metadata and get a document that can itself be a body,
-
and you can of course unwrap those layers of containers of head+body as well. when there's nothing left to unwrap or extract, that leaves you with some base atomic content, which may be just some plain text.
i argue that this is not meaningfully different from, say, a png file. it's just that text is often *inlined* into the presentation layer. we don't go about browsing the metadata and then linking to the content; we just stick the content directly into the same view as the metadata...
-
but just as we can inline text content, we can also separate it and link to it: we can stick it into object storage and address it as that literal text. and we can then arbitrarily package and wrap it into whatever containers, or as many layers of containers, as we care to. we could take some plaintext content and wrap it in some HTML template that itself gets wrapped in what eventually becomes an HTML document, which itself gets wrapped in an HTTP message. this is more or less how we do it rn.
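a minimal python sketch of that layering, just to make it concrete -- every function name here is made up for illustration, none of this is a real API:

```python
# a sketch of the layering described above: the same two-character
# content gets wrapped in progressively thicker containers.

def wrap_in_template(content: str) -> str:
    """inline the raw content into an HTML fragment (presentation layer)."""
    return f"<article>{content}</article>"

def wrap_in_document(fragment: str) -> str:
    """wrap the fragment in a full HTML document (head + body)."""
    return ("<!doctype html><html><head><title>a page</title></head>"
            f"<body>{fragment}</body></html>")

def wrap_in_http(document: str) -> str:
    """wrap the document in an HTTP response message."""
    body = document.encode()
    return ("HTTP/1.1 200 OK\r\n"
            "Content-Type: text/html\r\n"
            f"Content-Length: {len(body)}\r\n"
            f"\r\n{document}")

content = "hi"
message = wrap_in_http(wrap_in_document(wrap_in_template(content)))
# the atomic content survives every layer and can be extracted back out
assert content in message
```

the point being that the wrapping is mechanical and reversible; the atom at the center never changes.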
-
the takeaway, really, is that if we want the content to be replicable and manageable by some user(s), then it needs to be put into some data store, and it is exactly that data store that i am trying to model. the data model needs to support lossless serialization into whatever convenient format some person might ask for or need. i don't think "sql data dump" is an ideal export format, and i don't think "a really big json file" is ideal either. i want something that can be split up into atoms.
-
@trwnh it's funny to me how we often answer this not with the big json file (because it's impractical, and a deeper structure would be even worse) but with json-lines of individual activities
-
currently, my answer is "object storage to store the content + metadata store to describe the logical objects"
is there a better way? idk, but that's it, that's all i got rn
the object storage can be serialized to a filesystem of arbitrary files (even just text files), and the metadata store can be serialized to a bunch of RDF graphs (filesystem hierarchy mostly irrelevant but yeah you can just import an entire folder and subtree)
this seems convenient enough?
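a toy version of those two stores, assuming content-addressing by hash -- the predicate names are just illustrative, not any fixed vocabulary:

```python
import hashlib

# object storage keyed by content hash, plus a separate set of
# (subject, predicate, object) triples for the metadata.

object_store: dict[str, bytes] = {}
metadata_store: set[tuple[str, str, str]] = set()

def put(content: bytes) -> str:
    """store content (text or binary, doesn't matter) and return its address."""
    addr = "sha256:" + hashlib.sha256(content).hexdigest()
    object_store[addr] = content
    return addr

addr = put("hello world".encode())
metadata_store.add((addr, "dc:title", "a greeting"))
metadata_store.add((addr, "dc:format", "text/plain"))

# the content round-trips losslessly; the metadata lives elsewhere
assert object_store[addr].decode() == "hello world"
```

serializing out is then just: dump each stored blob to a file, and dump the triples to an RDF graph.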
-
@trwnh what does splitting things into atoms buy us? it sounds nice conceptually, but im not sure i understand the use-case
-
@alice yeah this is likely because people just "know" json and how to work with it, but it's not at all efficient for large data stores lol
-
Eugenus Optimus 🇺🇦 replied to infinite love ⴳ:
@trwnh I dunno, do you need search over multiple instances of non-structured data organized into a hierarchy or mesh?
-
@tech_himbo the main use case is easier handling of the content as data. for the most part, content is just data meant for human consumption. but what we lack is convenience around managing content as data. think of how a CMS works for example. now say you have a friend who runs a different CMS. what's the easiest way to export/import some content from you to them? how do you "share" content? reuse? and so on.
having the atomic data makes it easier to handle, reason about, wrap up, and so on.
-
@tech_himbo so basically it amounts to operating at the same semantic level as your communication peers. if i say "hi" to you, the core of the message is "hi", but in order to get it across i need to wrap it in some document and then wrap that document in an HTTP message or whatever. that's a lot of overhead for literally just two characters!
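a rough illustration of that overhead (the wrapper strings here are minimal examples, not real traffic, so actual numbers would be even worse):

```python
# the payload is 2 bytes, but the envelope it travels in is an order
# of magnitude bigger -- and this is a *minimal* envelope.

payload = "hi"
html = f"<!doctype html><html><head></head><body><p>{payload}</p></body></html>"
http = ("HTTP/1.1 200 OK\r\n"
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(html)}\r\n"
        f"\r\n{html}")

print(len(payload))  # 2
print(len(http))     # well over 30x the payload, all envelope
assert len(http) > 30 * len(payload)
```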
-
@trwnh yea, json exists as a cross-language serialization convenience all the way; mostly i mean that we went towards a plain-text list of serialized atomic activities instead of handling collections. (now im thinking of a world where we export and import AP collections as zip archives of xml files along with their signatures)
-
@tech_himbo it compounds beyond that, too. obviously i am not the only person who has ever strung those two characters together in that order. but that string may be used by other people in other ways. in much the same way that bittorrent allows peers to split a file into packets and reconstruct it piecewise, the string "hi" may be simply a component in some other larger piece of data -- for example, the content "hi" can be paired with metadata showing that it came from:me and to:you
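sketching that in python: two different "messages" referencing the exact same stored atom by its content address (field names are made up for illustration):

```python
import hashlib

# the same string stored once, referenced by two different messages
# via its content address -- analogous to bittorrent peers sharing
# and reassembling common pieces.

def addr(content: bytes) -> str:
    return "sha256:" + hashlib.sha256(content).hexdigest()

hi = addr(b"hi")

msg_a = {"content": hi, "from": "me", "to": "you"}
msg_b = {"content": hi, "from": "someone", "to": "someone-else"}

# both messages point at the exact same stored atom
assert msg_a["content"] == msg_b["content"]
```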
-
@alice AP collections are still a lil weird even conceptually (they have several flaws and bugs and are imo underspecced), but yes this is the general idea. it's all really just a graph merge at the end of the day. fedi doesn't even care about activities, it cares about the content (mostly Note but really Note.content primarily)
the other part of this is: can you imagine stuffing an entire HTML Article into a JSON string? is that really a good idea. like really.
-
@trwnh doesn’t this double the problem, though? if CMS A uses HTML snippets, and CMS B uses markdown with a header, you just need to translate from HTML to markdown. but if A uses JSON-LD and plaintext, and B uses XML and RTF, then you have to do two translations. although maybe your proposal is to ban everything but plaintext — in which case, why make plaintext the standard over any other format?
-
@trwnh i would have put all that json into a <head> anyway
-
@tech_himbo not so much "make plaintext the standard" as much as it is "make the human meaningful thing the standard"
so i could write plaintext or i could write semantic html or i could write markdown or asciidoc or whatever. doesn't matter. the serialization is less important than the actual information being conveyed
so CMS A and CMS B don't need to agree on jsonld or xml, but they do need to agree on what an "article" is. they can then (separately) describe that "article" with metadata.
-
@tech_himbo i am for the most part using the RDF data model here because it's generic enough to allow describing basically anything in the form of a graph. sharing some metadata is just a graph merge. sharing the content should be as simple as just copying one file (which can be any format, text or binary)
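"just a graph merge" can be shown with plain sets -- modeling triples as (subject, predicate, object) tuples, where the IRIs here are made up for illustration:

```python
# merging two RDF-style graphs is literally set union;
# duplicate statements collapse for free.

my_graph = {
    ("ex:article1", "dc:title", "hello"),
    ("ex:article1", "ex:content", "sha256:abc"),
}
your_graph = {
    ("ex:article1", "dc:title", "hello"),  # same statement, no conflict
    ("ex:article1", "ex:translation", "ex:article2"),
}

merged = my_graph | your_graph
assert len(merged) == 3  # the duplicate title statement merged away
```

(real RDF merges also have to handle blank node renaming, but ground triples really do merge this simply.)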