@shoq @hrheingold 2/2 your post also prompted me to think more about Twitter and journalism specifically, and about whether there are ways in which we should welcome a shift away from what news and journalism became with Twitter (for example, it made me realise that I hadn't for a while read any of those somewhat self-referential news stories, common over the last 10 to 15 years, that are just about what people are saying on Twitter.)
Posts
-
For all that it did right, Mastodon made a massive unforced error not realizing that the key to social media propagation was keeping journalists happy.
-
@shoq @hrheingold I think I agree: my personal experience of the great Mastodon migration, and of some people subsequently moving on to Bluesky, is that people, culturally, want sufficiently different things out of online social networks that the best solution is genuinely to have distinct, smaller communities. And that's just one reason why one might think mega-large social networks aren't necessarily a great thing. 1/2
-
What the hell is "superalignment"?
2/2
of course, someone wrote that programme, but once it's there, it's the programme itself that determines the outcomes of its computations.
To think that 'TESCREAL' tells us something about what gen AI models can and cannot do requires one to believe that an ethical question about utilitarianism (such as 'how much should we weigh future lives relative to current lives?', aka 'longtermism') somehow has a counterpart in the actual code of LLMs/genAI systems.
-
no worries, Catherine! I'm just going to add this for another day (or for anyone else who has wandered into this thread).
It seems uncontroversial to me to believe that the performance of a computer programme (what outputs it produces for which inputs) is determined wholly by the programme itself, not by any feelings, intentions or beliefs I have before, during or after its execution.
1/2
-
there are many questions for which knowing about TESCREAL might be informative (should I trust these people? do I want this product?), but for answering questions such as "what are the capabilities of this system?" or "what is its future potential?" it doesn't seem causally relevant at all.
-
"their philosophy" - who are "they"?
the technical development of LLMs/gen AI has been (and is being) driven by tens of thousands of researchers worldwide.
but setting that aside, appealing to TESCREAL to answer the question of whether genAI can become superintelligent feels like trying to answer the question of whether a VW Beetle can go faster than 120 km/h by pointing out that the VW Beetle was a project pushed by Hitler.
it feels like a category error to me
-
@CatherineFlick @jeffjarvis but I'm now also bewildered by what "TESCREAL" and its putative roots in eugenics are doing here. By what causal model of the world would a set of values determine whether AGI is empirically possible or not?
where is TESCREAL in the way genAI systems work or what they can do?
-
@CatherineFlick @jeffjarvis but nobody is denying that we should be monitoring current risk; we all agree on that! It's simply not what the superalignment unit (or the linked Guardian article) was about
literally nothing follows from the presence or absence of that unit with respect to the monitoring of current risk, and it would seem to me an out-and-out fallacy ('false dilemma') to assume that it does.
-
@jeffjarvis @CatherineFlick it's specifically a unit that was tasked with thinking about *future risk*, so it seems odd to criticise it for not investigating current risk.
as you mention the stochastic parrots paper, you might (or might not!) find this interesting: https://write.as/ulrikehahn/stochastic-parrot-is-a-misleading-metaphor-for-llms
-
@CatherineFlick @jeffjarvis "were thinking about"… sounds like the team has been disbanded…
-
@CatherineFlick @jeffjarvis Catherine, what is the evidence that they believe they can do it all beforehand? And, of course, you need to be monitoring, but I take the superalignment brief to be specifically about something we patently don't have yet: superhuman intelligence. So its impact isn't something that could currently be monitored.
-
@shoq @hrheingold from what I see on Threads, there is no candy being thrown: there seems to be a lot of complaining about how the algorithm (driven by Meta's general stance on news) is hostile to journalism. And Bluesky seems to be gradually fading away…
It feels to me more as though no platform has provided an alternative to Twitter that journalists en masse want to adopt.
-
I don't understand this critique. If you are trying to build something that is intended to go transformatively beyond what we have ever had before, *shouldn't* you think about the safety of that *before* you put it into the world?
-
conclusions: "the deployment of opaque and increasingly autonomous systems heightens the importance of a conception of manipulation that can account for manipulation occurring without designer intent. Such manipulation could emerge because it is favoured under the training objective (such as engagement maximization in certain content recommendation settings), or because a model learns to imitate manipulative behavior in its training data (such as manipulative text in language modeling)"
-
just came across a helpful review of the conceptual issues involved in studying deception and manipulation in AI systems. It does a really good job of going through extant definitions of manipulation and deception and drawing out how they might (or might not) apply to computational systems, making it useful for anyone interested in manipulation, persuasion, deception, coercion, whether in human or non-human agents #LLM
-
"Nothing exists but social media."
@natematias @bwaber fantastic!
-
ex co-lead of OpenAI's alignment team cites failure to prioritise safety as one of the motivating factors in his departure
OpenAI created a team to control 'superintelligent' AI — then let it wither, source says | TechCrunch
A source reveals that OpenAI's Superalignment team, which was created to develop ways to control 'superintelligent' AI, wasn't set up for success.
TechCrunch (techcrunch.com)
-
first issue of the Journal of Law and Empirical Analysis! #Lawfedi
all articles OA
-
Quick Tulip update: Didn't see her for a few days and started to worry since she's getting up in years for an ant.
@futurebird I had no idea they could grow so old!
-
New paper argues researchers should NOT consider real world implications of their work, presumably because the freedom to make empirically unfounded statements like women are ruining science with the rubbish brains evolution gave us is more important t...
it's perfectly possible (to me) to understand a statement to be meaningless, or to understand it to be false, but still answer a question such as: "Should scholars be discouraged from testing the veracity of this statement?"
or
"If the topic came up in a professional setting--for example, at a conference--how reluctant would you feel about sharing your beliefs on this topic openly?"
which is what the paper is about…