kaberett: Trans symbol with Swiss Army knife tools at other positions around the central circle. (Default)
kaberett ([personal profile] kaberett) wrote 2025-11-01 11:33 pm

new site!

Today has been largely taken up by my first visit to the NEW SITE for Admin: the LRP...

... or at least, my first visit in something like twenty years, because it's the old Cottenham racecourse and I absolutely went to one (1) race there in My Misspent Youth. Sudden wave of déjà vu on the final approach to the grandstand, as the perspective shifted to YEP, THIS IS A PLACE I'VE BEEN.

There was Make Tent go Up. There was meeting. There was Make Tent Go Down. There was being given Objects. And there was A BAT that did some beautifully ostentatious swooping against the darkening dusk, and I am delighted.

hilarita: stoat hiding under a log (Default)
hilarita ([personal profile] hilarita) wrote 2025-11-01 08:21 pm

Alphabet fic game

Rules: How many letters of the alphabet have you used for [starting] a fic title? One fic per line, 'A' and 'The' do not count for 'a' and 't'. Post your score out of 26 at the end, along with your total fic count.

I haven't written fic for a while, so I can recall almost nothing about some of these. Quite a few will end up being HP, written before the author revealed herself to be a bigot who funded hate; I won't link to those. Some of my fic for other fandoms is untitled, or lost.

A: Anticipation
B
C: Charity Boy
D: Days Bought From Death Discworld fic for [personal profile] rmc28
E: The Edge of Doom
F: Fog in the Fens Man from U.N.C.L.E. fic for an exchange
G: Grindelwald: His Aims and Downfall
H
I: Insufferable Bastard
J
K
L: A Letter from Chippenham Georgette Heyer fic, probably for an exchange. 
M: Manifold Directions Pratchett fic, unfinished; or Maedhros and Fingon: A Romance if you prefer a finished Silmarillion fic.
N: The No 1 House-Elves Detection Agency
O: Optimism
P: Pinnacle Dr Who
Q
R: Return to Hogwarts
S: Sitting Target
T: To Change the World HP and Special Operations Executive RPF
U
V
W: Where did all our probes go? Clangers fic.
X
Y: Your Mission
Z


17/26, 40 total works on AO3.
andrewducker: (Default)
andrewducker ([personal profile] andrewducker) wrote 2025-11-01 05:43 pm

Thoughts on the way home.

Glasgow still feels much more city-like to me than Edinburgh.

Which is probably why I prefer living in Edinburgh.

(Great to visit though)
andrewducker: (Default)
andrewducker ([personal profile] andrewducker) wrote 2025-11-01 12:04 pm

Photo cross-post


Sophia and Gideon making the DNA for their respective eye colours.

(First ever trip to the Glasgow Science Centre, it was awesome)
Original is here on Pixelfed.scot.

Schneier on Security ([syndicated profile] schneier_no_tracking_feed) wrote 2025-10-31 09:06 pm

Will AI Strengthen or Undermine Democracy?

Posted by Bruce Schneier

Listen to the Audio on NextBigIdeaClub.com

Below, co-authors Bruce Schneier and Nathan E. Sanders share five key insights from their new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.

What’s the big idea?

AI can be used both for and against the public interest within democracies. It is already being used in the governing of nations around the world, and there is no escaping its continued use in the future by leaders, policy makers, and legal enforcers. How we wire AI into democracy today will determine if it becomes a tool of oppression or empowerment.

1. AI’s global democratic impact is already profound.

It’s been just a few years since ChatGPT stormed into view, and AI’s influence has already permeated every democratic process in governments around the world:

  • In 2022, an artist collective in Denmark founded the world’s first political party committed to an AI-generated policy platform.
  • Also in 2022, South Korean politicians running for the presidency were the first to use AI avatars to communicate with voters en masse.
  • In 2023, a Brazilian municipal legislator passed the first enacted law written by AI.
  • In 2024, a U.S. federal court judge started using AI to interpret the plain meaning of words in U.S. law.
  • Also in 2024, the Biden administration disclosed more than two thousand discrete use cases for AI across the agencies of the U.S. federal government.

The examples illustrate the diverse uses of AI across citizenship, politics, legislation, the judiciary, and executive administration.

Not all of these uses will create lasting change. Some of these will be one-offs. Some are inherently small in scale. Some were publicity stunts. But each use case speaks to a shifting balance of supply and demand that AI will increasingly mediate.

Legislators need assistance drafting bills and have limited staff resources, especially at the local and state level. Historically, they have looked to lobbyists and interest groups for help. Increasingly, it’s just as easy for them to use an AI tool.

2. The first places AI will be used are where there is the least public oversight.

Many of the use cases for AI in governance and politics have vocal objectors. Some make us uncomfortable, especially in the hands of authoritarians or ideological extremists.

In some cases, politics will be a regulating force to prevent dangerous uses of AI. Massachusetts has banned the use of AI face recognition in law enforcement because of real concerns voiced by the public about its tendency to encode systems of racial bias.

Some of the uses we think might be most impactful are unlikely to be adopted fast because of legitimate concern about their potential to make mistakes, introduce bias, or subvert human agency. AIs could be assistive tools for citizens, acting as their voting proxies to help them weigh in on larger numbers of more complex ballot initiatives, but we know that many will object to anything that verges on AIs being given a vote.

But AI will continue to be rapidly adopted in some aspects of democracy, regardless of how the public feels. People within democracies, even those in government jobs, often have great independence. They don’t have to ask anyone if it’s ok to use AI, and they will use it if they see that it benefits them. The Brazilian city councilor who used AI to draft a bill did not ask for anyone’s permission. The U.S. federal judge who used AI to help him interpret law did not have to check with anyone first. And the Trump administration seems to be using AI for everything from drafting tariff policies to writing public health reports—with some obvious drawbacks.

It’s likely that even the thousands of disclosed AI uses in government are only the tip of the iceberg. These are just the applications that governments have seen fit to share; the ones they think are the best vetted, most likely to persist, or maybe the least controversial to disclose.

3. Elites and authoritarians will use AI to concentrate power.

Many Westerners point to China as a cautionary tale of how AI could empower autocracy, but the reality is that AI provides structural advantages to entrenched power in democratic governments, too. The nature of automation is that it gives those at the top of a power structure more control over the actions taken at its lower levels.

It’s famously hard for newly elected leaders to exert their will over the many layers of human bureaucracies. The civil service is large, unwieldy, and messy. But it’s trivial for an executive to change the parameters and instructions of an AI model being used to automate the systems of government.

The dynamic of AI effectuating concentration of power extends beyond government agencies. Over the past five years, Ohio has undertaken a project to do a wholesale revision of its administrative code using AI. The leaders of that project framed it in terms of efficiency and good governance: deleting millions of words of outdated, unnecessary, or redundant language. The same technology could be applied to advance more ideological ends, like purging all statutory language that places burdens on business, neglects to hold businesses accountable, protects some class of people, or fails to protect others.

Whether you like or despise automating the enactment of those policies will depend on whether you stand with or are opposed to those in power, and that’s the point. AI gives any faction with power the potential to exert more control over the levers of government.

4. Organizers will find ways to use AI to distribute power instead.

We don’t have to resign ourselves to a world where AI makes the rich richer and the elite more powerful. This is a technology that can also be wielded by outsiders to help level the playing field.

In politics, AI gives upstart and local candidates access to skills and the ability to do work on a scale that used to only be available to well-funded campaigns. In the 2024 cycle, Congressional candidates running against incumbents like Glenn Cook in Georgia and Shamaine Daniels in Pennsylvania used AI to help themselves be everywhere all at once. They used AI to make personalized robocalls to voters, write frequent blog posts, and even generate podcasts in the candidate’s voice. In Japan, a candidate for Governor of Tokyo used an AI avatar to respond to more than eight thousand online questions from voters.

Outside of public politics, labor organizers are also leveraging AI to build power. The Worker’s Lab is a U.S. nonprofit developing assistive technologies for labor unions, like AI-enabled apps that help service workers report workplace safety violations. The 2023 Writers’ Guild of America strike serves as a blueprint for organizers. They won concessions from Hollywood studios that protect their members against being displaced by AI while also winning them guarantees for being able to use AI as assistive tools to their own benefit.

5. The ultimate democratic impact of AI depends on us.

If you are excited about AI and see the potential for it to make life, and maybe even democracy, better around the world, recognize that there are a lot of people who don’t feel the same way.

If you are disturbed about the ways you see AI being used and worried about the future that leads to, recognize that the trajectory we’re on now is not the only one available.

The technology of AI itself does not pose an inherent threat to citizens, workers, and the public interest. Like other democratic technologies—voting processes, legislative districts, judicial review—its impacts will depend on how it’s developed, who controls it, and how it’s used.

Constituents of democracies should do four things:

  • Reform the technology ecosystem to be more trustworthy, so that AI is developed with more transparency, more guardrails around exploitative use of data, and public oversight.
  • Resist inappropriate uses of AI in government and politics, like facial recognition technologies that automate surveillance and encode inequity.
  • Responsibly use AI in government where it can help improve outcomes, like making government more accessible to people through translation and speeding up administrative decision processes.
  • Renovate the systems of government vulnerable to the disruptive potential of AI’s superhuman capabilities, like political advertising rules that never anticipated deepfakes.

These four Rs are how we can rewire our democracy in a way that applies AI to truly benefit the public interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Next Big Idea Club.

QC RSS ([syndicated profile] questionable_content_feed) wrote 2025-10-30 09:40 pm

5691

personally I'd never ask someone about their pronouns like that, but Clinton is nosy

kaberett: Trans symbol with Swiss Army knife tools at other positions around the central circle. (Default)
kaberett ([personal profile] kaberett) wrote 2025-10-30 10:11 pm

today my most important job was Pointy Objects

I supplied knives and fine motor control; the toddler supplied art direction; the toddler's resident adults supplied outlines for me to cut around (and candles, and matches, and in fact all of the cutting of the tiny pumpkin).

one large and one small pumpkin, carved, with candles, in the dark

andrewducker: (Dr Who)
andrewducker ([personal profile] andrewducker) wrote 2025-10-30 05:45 pm

Life with two kids: Wednesday shoes

This morning Sophia announced, as we were about to leave the house, that she couldn't find her school shoes.

Her black school shoes.

The ones that are an integral part of her Wednesday costume. For the school Halloween disco. This evening.

Jane and I frantically tore the house apart for fifteen minutes and checked *everywhere*. Eventually we forced her, crying, to put on her trainers, promising her that if her shoes turned up we would bring them in to her.

Because we left fifteen minutes late we missed the bus. And so it was that we were halfway through the walk to school when Sophia quietly said "Oh."

And then told me that she'd just remembered that yesterday she'd come home from school in her welly boots, leaving her shoes at her peg.

You'll be delighted to hear that I didn't murder her.
Schneier on Security ([syndicated profile] schneier_no_tracking_feed) wrote 2025-10-30 11:05 am

The AI-Designed Bioweapon Arms Race

Posted by Bruce Schneier

Interesting article about the arms race between AI systems that invent/design new biological pathogens, and AI systems that detect them before they’re created:

The team started with a basic test: use AI tools to design variants of the toxin ricin, then test them against the software that is used to screen DNA orders. The results of the test suggested there was a risk of dangerous protein variants slipping past existing screening software, so the situation was treated like the equivalent of a zero-day vulnerability.

[…]

Details of that original test are being made available today as part of a much larger analysis that extends the approach to a large range of toxic proteins. Starting with 72 toxins, the researchers used three open source AI packages to generate a total of about 75,000 potential protein variants.

And this is where things get a little complicated. Many of the AI-designed protein variants are going to end up being non-functional, either subtly or catastrophically failing to fold up into the correct configuration to create an active toxin.

[…]

In any case, DNA sequences encoding all 75,000 designs were fed into the software that screens DNA orders for potential threats. One thing that was very clear is that there were huge variations in the ability of the four screening programs to flag these variant designs as threatening. Two of them seemed to do a pretty good job, one was mixed, and another let most of them through. Three of the software packages were updated in response to this performance, which significantly improved their ability to pick out variants.

There was also a clear trend in all four screening packages: The closer the variant was to the original structurally, the more likely the package (both before and after the patches) was to be able to flag it as a threat. In all cases, there was also a cluster of variant designs that were unlikely to fold into a similar structure, and these generally weren’t flagged as threats.

The research is all preliminary, and there are a lot of ways in which the experiment diverges from reality. But I am not optimistic about this particular arms race. I think that the ability of AI systems to create something deadly will advance faster than the ability of AI systems to detect its components.

kaberett: Photo of a pile of old leather-bound books. (books)
kaberett ([personal profile] kaberett) wrote 2025-10-29 09:48 pm

[pain] working on an articulation

I have, in the latest book, got to The Obligatory Page And A Half On Descartes, but this one makes a point of describing it as a "reductionistic approach".

The Thing Is, of course, that much like the Bohr model (for all that's 250 years younger, give or take), for many and indeed quite plausibly most purposes, The Cartesian Model Of Pain is, for most people and for most purposes, good enough: if you've got to GCSE level then you'll have met the Bohr model; if you get to A-level, you'll start learning about atomic orbitals; and then by the time I was starting my PhD I had to throw out the approximation of atomic nuclei as volumeless points (the reason you get measurable and interpretable stable isotope fractionations of thallium is -- mostly! -- down to the nuclear field shift effect).

Similarly, most of the time you don't actually need to know anything beyond the lie-to-children first-approximation of "if you're experiencing pain, that means something is damaging you, so work out what it is and stop doing that". The Bohr model is good enough for a general understanding of atomic bonds and chemical reactions; specificity theory is good enough for day-to-day encounters with acute pain.

The problem with specificity theory isn't actually that it's wrong (although it is); it's that it gets misapplied in cases where Something More Complicated is going on in ways that obscure even the possibility of Something More Complicated. The problem, as far as I'm concerned, is that it doesn't get presented with the footnote of "this isn't the whole story, and for understanding anything beyond very short-term acute pain you need to go into considerably more detail". But most people aren't in more complex pain than that! Estimates run at ~20% of the population living with chronic pain, but even if we accept the 43% that sometimes gets quoted about the UK, most people do not live with chronic pain.

There's probably an analogy here with the "Migraine Is Not Just A Bad Headache" line (and indeed I'm getting increasingly irritated with all of these books discussing migraine as though the problem is solely and entirely the pain, as opposed to, you know, the rest of the disabling neurological symptoms) but I'm upping my amitriptyline again and it's past my bedtime so I'm not going to work all the details of that out now, but, like, Pain Is Not Just A Tissue Damage, style of thing.

Anyway. The point is that I still haven't actually read Descartes (I've got the posthumously published and much more posthumously translated Treatise on Man in PDF, I just haven't got to it yet) and nonetheless I am bristling at people describing him as reductionist (derogatory). Just. We aren't going to do better if we also persist in wilful misunderstandings and misrepresentations for the sake of slagging off someone who has been dead for three hundred and seventy-five years instead of recognising the actual value inherent in "good enough for most people most of the time", and how that value complicates attempts at more nuance! How about we actually acknowledge the reasons the idea is so compelling, huh, and discuss the circumstances under which the approximation holds versus breaks down? How about that for an idea.