Hacker News new | past | comments | ask | show | jobs | submit login

> they asked: who on earth are Tasha McCauley and Helen Toner?

As a prominent researcher in AI safety (I discovered prompt injection) I should explain that Helen Toner is a big name in the AI safety community - she’s one of the top 20 most respected people in our community, like Rohin Shah.

The “who on earth” question is a good question about Tasha. But grouping Helen in with Tasha is just sexist. By analogy, Tasha is like Kimbal Musk, whereas Helen is like Tom Mueller.

Tasha seems unqualified but Helen is extremely qualified. Grouping them together is sexist and wrong.




I'm not sure what you are referring to with "AI safety community". Her background isn't exactly a strong "AI safety" either before joining the board if you read her CV. I don't see there sexism asking these questions.


You’re definitely correct that her CV doesn’t adequately capture her reputation. I’ll put it this way, I meet with a lot of powerful people and I was equally nervous when I met with her as when I met with three officers of the US Air Force. She holds comparable influence to a USAF Colonel.

She is extremely knowledgeable about the details of all the ways that AI can cause catastrophe and also she’s one of the few people with the ear of the top leaders in the DoD (similar to Eric Schmidt in this regard).

Basically, she’s one of a very small number of people who are credibly reducing the chances that AI could cause a war.

If you’d like to learn about or become part of the AI safety community, a good way to start is to check out Rohin Shah’s Alignment Newsletter. http://rohinshah.com/alignment-newsletter/


> She holds comparable influence to a USAF Colonel.

That's... not very much? There are 4,400 colonels in the USAF: https://diversity.defense.gov/LinkClick.aspx?fileticket=gxMV...


So the rumors that she has strong ties with the CIA are not unfounded?


I think it's pretty safe to assume by now that US state-side counterintelligence has full control over AI safety circles, and given that everything is a nail when you're a hammer, the spooks probably consider these people domestic terrorists and process them accordingly, including active and passive prophylaxis. I mean, what proof do you need when people like OP lose sleep and brag about meeting the spooks a handful of times like it's the most important event of their lives! I guess it must be both empowering AND embarrassing for the spooks to have this situation turn into shit.


Her background looks pretty solid to me given that AI is a recent field:

https://scholar.google.com/citations?user=NNnQg0MAAAAJ&hl=en

https://futureshub.anu.edu.au/experts/helen-toner

And she setup the Center for Security and Emerging Technology at Georgetown so also has some degree of executive experience.


> Center for Security and Emerging Technology

Sounds like a bullshit job.


If you want to actually understand how the world works at scale, you're going to need think tanks like these that are academics and experts whose job is to research and understand issues. Or you could just watch whatever hatchet job is on Youtube or doing a Twitter screed.


Not seeing the sexism. I think the AI safety community is a little early and notoriety therein probably isn’t sufficient to qualify somebody to direct an $80 bn company.


The article refers to Tech Twitter, and that’s where the sexism is.

People on Twitter are making degrading memes of her and posting weird, creepy, harassing comments like this: “Why does Helen Toner have shelf tits” https://x.com/smolfeelshaver/status/1726073304136450511?s=46

Search for “Helen Toner” on Twitter and you will see she is being singled out for bullying by a bunch of weird creepy white dudes who I guess apparently work in tech.

> I think the AI safety community is a little early and notoriety therein probably isn’t sufficient to qualify somebody to direct an $80 bn company.

Normally you’d be right. In the specific case of OpenAI, however, their charter requires safety to be the number one priority of their directors, higher than making money or providing stable employment or anything else that a large company normally prioritizes. This is from OpenAI’s site:

“each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial” https://openai.com/our-structure


In my opinion...

If you want to be the titan of an industry and do things that put you at the center of media attention, you have to expect comments of this kind and not be surprised when they happen. Whether you are a man, a woman or anybody else.

If you don't expect "not very nice" or ambivalent reactions from people, you are an amateur and you shouldn't be in the board of such a prominent company.


True. I just think it’s messed up that she is equally qualified for this specific board (given the unusual fiduciary duty definition defined in the OpenAI charter I linked above) as e.g. Adam D’Angelo, and I don’t see a bunch of creepy people [note: I edited this part because fsckboy made a good point] hating on him despite him also being part of this same exact power struggle. What does founding Quora have to do with developing safe, beneficial AGI? If anything, Adam seems like more of a token board member than Helen, in that Adam is “token rich dude from early days at Google or Facebook”.


> I don’t see a bunch of creepy people [note: I edited this part because fsckboy made a good point] hating on him despite him also being part of this same exact power struggle.

I'm not sure where I am on the creepy scale but I'm happy to hate on him because I really don't think he should be anywhere near that board. And yes, Helen Toner does have a claim. Not sure about the level of experience in the typical role of a board member but plenty of cred when it comes to the subject matter.


He does seem token in that regard but it was a known quality of tokenism. Being a founder of Quora and a CTO at Facebook has enough cache to explain being on the board of directors for another tech company. I took Tasha and Helen being grouped together happening not due to being women but due to being relatively unknown


>...creepy white dudes

annnnd you just shredded your credibility with some casual bigotry


You’re right. I’m sorry for that blunder and I edited my comment to correct it.


Why? He was right on the money.


Because despite a sudden popularity of viewing white people as sub-human or non-human, they are in fact human, and this made that remark explicitly sexist and racist.


Being at the forefront of the development of new technologies means that you are sailing uncharted territory. It's my belief that in cases like this, previous qualifications are almost irrelevant.

Of course you have to be competent on the subject matter, work hard, iterate and have luck.


That’s a tweet with <500 views, a number of which I assume is because you linked it.

> weird creepy white dudes…

This is racism. And sexist. How do you know it’s white people or dudes?


I don’t know about this tweet’s author. I mean that most of the people slandering her on Twitter are white men based on their profile pictures. But you’re right that I still shouldn’t stereotype; I’m sorry.


Okay some guy posted that with literally no likes? “Weird loner is weird on the internet, gets no engagement, news at 11”


I agreed with you up until I looked her up. Then Tasha McCauley, who I forgot was Joseph Gordon-Levitt's wife. Shivon Zillis, Elon Musk's whatever that is.

So these three plus Mira Murati make 4 for 4 hot women governing OpenAI. I'm not a Data Scientist but that's a pattern. Not one ugly woman who has a concept of AI governance? Not one single George Elliot-looking genius?


"hot women", not quite, they're just not ugly.

Women are socially allowed to use artificial means to greatly improve their appearance. I'm referring to makeup and expensive hair treatments. Women from upper-middle or upper classes have even more of an advantage in using these. So if you're a thin woman, unless you were unlucky enough to be born disfigured, you're a single afternoon away of looking like a movie star.

If Sam Altman was socially allowed to wear makeup and a wig, you'd call him a heartthrob.


I noticed this too but it's easily explainable as a coincidence so calling it a pattern is a bit far stretched, especially when at least two of them have subject creds.


Oh I def think it's a pattern but I don't know that it's a bad pattern. Honestly, being married to Joseph Gordon-Levitt is probably a good sign.


Have you also mentally slotted all of the male board members into “hot” or “not” categories, or is this a treatment that only the women get?


Honestly, maybe you're right and they're all hot, but I'm not attracted to men so I can't really say. I would be surprised if women/et al found Altman or Sutskever physically attractive but who knows maybe they are kinda cute. What do you think?

Are you claiming that physical appearance has nothing to do with politics, or that we just shouldn't comment on it?

I think it's pretty obvious that the OpenAI men aren't too attractive by most standards, as opposed to say US presidents, who are mostly sexually attractive.


Where, pray, is the sexism?

This seems to be an instance of “if you hear the dog whistle you’re the dog”


Why group the two women together?

Not that I think either are competent


They're the only board members who aren't notable. After the ousting the board consisted of a prominent CEO in D'Angelo, a prominent researcher in Sutskever, and Toner and McCauley. It's a grouping of two randos, not two women.


Didn't you just answer your own question?


You have three people, one of them is at least quite well known to the outside world, or at least his / her company is well known along with an ex Facebook CTO title.

You have two people left, we have no idea what he / she is, their work are not public outside of specific domain, and no Public / PR exposure to even anyone who follows tech closely.

Those two people we group them together. And they happened to be woman. ( At least we assumed their gender ). And we are now being called sexist? Seriously?


How do you "discover" that user input can be used for injection attacks?


Good question. We were the first team to demonstrate that this type of vulnerability exists in LLMs. We then made an immediate responsible disclosure to OpenAI, which is confirmed as the first disclosure of its kind by OWASP:

https://github.com/OWASP/www-project-top-10-for-large-langua...

In the citations:

14. Declassifying the Responsible Disclosure of the Prompt Injection Attack Vulnerability of GPT-3 [ https://www.preamble.com/prompt-injection-a-critical-vulnera... ]: Preamble; earliest disclosure of Prompt Injection


You "discovered" trying to fool a chatbot? It's one of the first things everyone does, even with old generation chatbots before LLMs.

If so then 4chan had prior art, discovering prompt injections when they made Microsoft's Tay chatbot become a racist on twitter.


> As a prominent researcher in AI safety (I discovered prompt injection)...

You might want to try a little more humility when making statements like that. You might think it bolsters your credibility but for many it makes you look out of touch and egotistical.


You’re right. I’m sorry about that and have been behind on sleep during this whole debacle. You are 100% right and thank you for the kindness of letting me know. I just now tried to go back and update the comment to change it and say “my startup, Preamble, discovered prompt injection” so it’s less about me about more about our business. Unfortunately I’m past the HN comment editing time window but I wanted to write back and say that I took your feedback to heart and thank you.


Thanks!


They are being grouped together because they are the only two on the board with no qualifications. What is an AI safety commission? Tell engineers to make sure the AI is not bigoted?


> Tell engineers to make sure the AI is not bigoted?

That’s more the domain of “AI ethics” which I guess is cool but I personally think is much much much less important than AI safety.

AI safety is concerned with preventing human extinction due to (for example) AI causing accidental war or accidental escalation.

For example, making sure that AI won’t turn a heated conventional war into a nuclear war by being used for military intelligence analysis (writing summaries of the current status of a war) and then incorrectly saying that the other side is preparing for a nuclear first strike -- due to the AI being prone to hallucination, or to prompt injection or adversarial examples which can be injected by 3rd-party terrorists.

For more information on this topic, you can reference the recent paper ‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states:

https://doras.dcu.ie/25405/2/Catalytic%20nuclear%20war.pdf


So completely useless then since we are nowhere near that paradigm.


It's an industry/field of study that a small group of people with a lot of money think would be neat if it existed and they could get paid for, so they willed it into existence.

It has about as much real world applicability as those people who charge money for their classes on how to trade crypto. Or maybe "how to make your own cryptocurrency".

Not only does current AI not have that ability, it's not clear that AI with relevant capabilities will ever be created.

IMO it's born out of a generation having grown up on "Ghost in the Shell" imagining that if an intelligence exists and is running on silicon, it can magically hack and exist inside every connected device on earth. But "we can't prove that won't happen".


The hypotheticals explored in the article linked by upwardbound don't deal with an AI acting independently. They detail what could be soon possible for small terrorist groups: flooding social and news media with false information, images, or videos that imply one or more states are planning or about to use nuclear weapons. Responses to suspected nuclear launches have to be swift (article says 3 minutes), so tainting the data at a massive scale using AI would increase the chance of an actual launch.

The methods behind the different scenarios - disinformation, false-flagging, impersonation, stoking fear, exploiting the tools used to make the decisions - aren't new. States have all the capability to do them right now, without AI. But if a state did so, they would face annihilation if anyone found out what they were doing. And the manpower needed to run a large-scale disinformation campaign means a leak is pretty likely. So it's not worth it.

But, with AI, a small terrorist group could do it. And it'd be hard to know which ones were planning to, because they'd only need to buy the same hardware as any other small tech company.

(I hope I've summarized the article well enough.)


> But if a state did so, they would face annihilation if anyone found out what they were doing.

Like what happened to China after they released Tiktok, or what happened to Russia after they used their troll farms to affect public sentiment surrounding US elections?

"Flooding social media" isn't something difficult to do right now, with far below state-level resources. AIs don't come with built-in magical account-creation tools nor magical rate-limiter-removal tools. What changes with AI is the quality of the message that's crafted, nothing more.

No military uses tweets to determine if it has been nuked. AI doesn't provide a new vector to cause a nuclear war.


Great summary of several key points from the article, yes! If you’d like to check out other avenues by which AI could lead to war, check out the papers linked to from this working group I’m a part of callers DISARM:SIMC4: https://simc4.org


The literature is full of references to the application of the precautionary principle to the development of AI.

https://www.researchgate.net/publication/331744706_Precautio...

https://www.researchgate.net/publication/371166526_Regulatin...

https://link.springer.com/article/10.1007/s11569-020-00373-5

https://www.aei.org/technology-and-innovation/treading-caref...

https://itif.org/publications/2019/02/04/ten-ways-precaution...

It's clear what a branch of OpenAI thinks about this ...stuff..., they're making a career out of it. I agree with you!


Looked up Tasha.

Seems like she's run a robotics company or something like that. Definitely someone in the tech business.

Not everyone on a board needs to have done the exact thing that the company does.


She cofounded Fellow Robots, which shipped a product scanner on wheels to 11 stores[1] and then quietly folded up (homepage is now parked by GoDaddy).

[1] https://www.therobotreport.com/navii-autonomous-retail-robot...


> But grouping Helen in with Tasha is just sexist.

No, it's not. They're grouped together because everyone knows who Sama, Greg Brockman, Ilya, and Adam D'Angelo (Quora founder / FB CTO) are, and maybe 5% knew who Helen and Tasha are. You linked to a rando twitter user making fun of her, but I've seen far more putting down Ilya for his hairline.


You're appealing to your own prominence to assert that Helen Toner is extremely qualified. Looking at her career history, she is not.


She co-founded an entire think tank. A highly-respected one at that; CSET is up there as one of the top 5 think tanks on NatSec issues.

https://80000hours.org/podcast/episodes/helen-toner-on-secur...

    How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: