New documents reveal the behind-the-scenes story of OpenAI's founding
As a reminder, Elon Musk was part of OpenAI's founding team and even funded its operations in the early days, but he left after disagreements with the other founders, Sam Altman chief among them. It seems Elon just isn't built for working with partners. In any case, new documents were filed in court as part of Elon Musk's lawsuit against OpenAI, exposing internal emails from the company's earliest years, and the emails do not disappoint. They reveal intense early tensions over control, talent recruitment, and fears of AI dominance.
A lot of money was poured in to keep Google from poaching talent
The revealed emails span 2015-2018, covering the period from OpenAI's founding through Musk's departure. The leadership was anxious about control of AGI, with Ilya Sutskever warning of potential "dictatorship" scenarios, whether from Google or from within. Salary battles erupted when DeepMind tried to recruit OpenAI's founding team, forcing rapid raises of $100k-$200k per person. Internal arguments flared over the collaboration with Microsoft, with Musk saying he would pay "$50 million not to look like Microsoft's marketing bitch." Altman's role also raised concerns, with Sutskever questioning his "cost function" and whether AGI is really his primary motivation.
Emails in the first act, foreshadowing the third
These email exchanges are remarkable finds from OpenAI's earliest days. They shed further light on Musk's battle with Altman and on the undercurrents that may have contributed to the chaos of the board ouster in November 2023, and to the telenovela that has been unfolding ever since, which has seen all but three of the founders leave.
Wait, can we actually read the emails?!
Well, yes, absolutely. Here are a few of them. Dig in.
Subject: question
Sam Altman to Elon Musk – May 25, 2015 9:10 PM
Been thinking a lot about whether it's possible to stop humanity from developing AI.
I think the answer is almost definitely not.
If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first.
Any thoughts on whether it would be good for YC to start a Manhattan Project for AI? My sense is we could get many of the top ~50 to work on it, and we could structure it so that the tech belongs to the world via some sort of nonprofit but the people working on it get startup-like compensation if it works. Obviously we'd comply with/aggressively support all regulation.
Sam
Elon Musk to Sam Altman – May 25, 2015 11:09 PM
Probably worth a conversation
Sam Altman to Elon Musk – Jun 24, 2015 10:24 AM
The mission would be to create the first general AI and use it for individual empowerment—ie, the distributed version of the future that seems the safest. More generally, safety should be a first-class requirement.
I think we’d ideally start with a group of 7-10 people, and plan to expand from there. We have a nice extra building in Mountain View they can have.
I think for a governance structure, we should start with 5 people and I’d propose you, Bill Gates, Pierre Omidyar, Dustin Moskovitz, and me. The technology would be owned by the foundation and used “for the good of the world”, and in cases where it’s not obvious how that should be applied the 5 of us would decide. The researchers would have significant financial upside but it would be uncorrelated to what they build, which should eliminate some of the conflict (we’ll pay them a competitive salary and give them YC equity for the upside). We’d have an ongoing conversation about what work should be open-sourced and what shouldn’t. At some point we’d get someone to run the team, but he/she probably shouldn’t be on the governance board.
Will you be involved somehow in addition to just governance? I think that would be really helpful for getting work pointed in the right direction getting the best people to be part of it. Ideally you’d come by and talk to them about progress once a month or whatever. We generically call people involved in some limited way in YC “part-time partners” (we do that with Peter Thiel for example, though at this point he’s very involved) but we could call it whatever you want. Even if you can’t really spend time on it but can be publicly supportive, that would still probably be really helpful for recruiting.
I think the right plan with the regulation letter is to wait for this to get going and then I can just release it with a message like “now that we are doing this, I’ve been thinking a lot about what sort of constraints the world needs for safefy.” I’m happy to leave you off as a signatory. I also suspect that after it’s out more people will be willing to get behind it.
Sam
Elon Musk to Sam Altman – Jun 24, 2015 11:05 PM
Agree on all
Subject: follow up from call
Greg Brockman to Elon Musk, (cc: Sam Altman) – Nov 22, 2015 6:11 PM
Hey Elon,
Nice chatting earlier.
As I mentioned on the phone, here's the latest early draft of the blog post: https://quip.com/6YnqA26RJgKr. (Sam, Ilya, and I are thinking about new names; would love any input from you.)
Obviously, there's a lot of other detail to change too, but I'm curious what you think of that kind of messaging. I don't want to pull any punches, and would feel comfortable broadcasting a stronger message if it feels right. I think it's mostly important that our messaging appeals to the research community (or at least the subset we want to hire). I hope for us to enter the field as a neutral group, looking to collaborate widely and shift the dialog towards being about humanity winning rather than any particular group or company. (I think that's the best way to bootstrap ourselves into being a leading research institution.)
I've attached the offer letter template we've been using, with a salary of $175k. Here's the email template I've been sending people:
Attached is your official YCR offer letter! Please sign and date at your convenience. There will be two more documents coming:
A separate letter offering you 0.25% of each YC batch you are present for (as compensation for being an Advisor to YC).
The At-Will Employment, Confidential Information, Invention Assignment and Arbitration Agreement
(As this is the first batch of official offers we've done, please forgive any bumpiness along the way, and please let me know if anything looks weird!)
We plan to offer the following benefits:
Health, dental, and vision insurance
Unlimited vacation days with a recommendation of four weeks per year
Paid parental leave
Paid conference attendance when you are presenting YC AI work or asked to attend by YC AI
We're also happy to provide visa support. When you're ready to talk about visa-related questions, I'm happy to put you in touch with Kirsty from YC.
Please let me know if you have any questions — I'm available to chat any time! Looking forward to working together :).
– gdb
Subject: Draft opening paragraphs
Elon Musk to Sam Altman – Dec 8, 2015 9:29 AM
It is super important to get the opening summary section right. This will be what everyone reads and what the press mostly quotes. The whole point of this release is to attract top talent. Not sure Greg totally gets that.
—- OpenAI is a non-profit artificial intelligence research company with the goal of advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unencumbered by an obligation to generate financial returns.
The underlying philosophy of our company is to disseminate AI technology as broadly as possible as an extension of all individual human wills, ensuring, in the spirit of liberty, that the power of digital intelligence is not overly concentrated and evolves toward the future desired by the sum of humanity.
The outcome of this venture is uncertain and the pay is low compared to what others will offer, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.
Sam Altman to Elon Musk – Dec 8, 2015 10:34 AM
how is this?
__
OpenAI is a non-profit artificial intelligence research company with the goal of advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unencumbered by an obligation to generate financial returns.
Because we don't have any financial obligations, we can focus on the maximal positive human impact and disseminating AI technology as broadly as possible. We believe AI should be an extension of individual human wills and, in the spirit of liberty, not be concentrated in the hands of the few.
The outcome of this venture is uncertain and the pay is low compared to what others will offer, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.
Subject: just got word…
Sam Altman to Elon Musk – Dec 11, 2015 11:30 AM
that deepmind is going to give everyone in openAI massive counteroffers tomorrow to try to kill it.
do you have any objection to me proactively increasing everyone's comp by 100-200k per year? i think they're all motivated by the mission here but it would be a good signal to everyone we are going to take care of them over time.
sounds like deepmind is planning to go to war over this, they've been literally cornering people at NIPS.
Elon Musk to Sam Altman – Dec 11, 2015
Has Ilya come back with a solid yes?
If anyone seems at all uncertain, I’m happy to call them personally too. Have told Emma this is my absolute top priority 24/7.
Sam Altman to Elon Musk – Dec 11, 2015 12:15 PM
yes committed committed. just gave his word.
Elon Musk to Sam Altman – Dec 11, 2015 12:32 PM
awesome
Sam Altman to Elon Musk – Dec 11, 2015 12:35 PM
everyone feels great, saying stuff like "bring on the deepmind offers, they unfortunately dont have 'do the right thing' on their side"
news out at 130 pm pst
Subject: The OpenAI Company
Elon Musk to Ilya Sutskever, Pamela Vagata, Vicki Cheung, Diederik Kingma, Andrej Karpathy, John D. Schulman, Trevor Blackwell, Greg Brockman, (cc: Sam Altman) – Dec 11, 2015 4:41 PM
Congratulations on a great beginning!
We are outmanned and outgunned by a ridiculous margin by organizations you know well, but we have right on our side and that counts for a lot. I like the odds.
Our most important consideration is recruitment of the best people. The output of any company is the vector sum of the people within it. If we are able to attract the most talented people over time and our direction is correctly aligned, then OpenAI will prevail.
To this end, please give a lot of thought to who should join. If I can be helpful with recruitment or anything else, I am at your disposal. I would recommend paying close attention to people who haven't completed their grad or even undergrad, but are obviously brilliant. Better to have them join before they achieve a breakthrough.
Looking forward to working together,
Elon
Subject: compensation framework
Greg Brockman to Elon Musk, (cc: Sam Altman) – Feb 21, 2016 11:34 AM
Hi all,
We're currently doing our first round of full-time offers post-founding. It's obviously super important to get these right, as the implications are very long-term. I don't yet feel comfortable making decisions here on my own, and would love any guidance.
Here's what we're currently doing:
Founding team: $275k salary + 25bps of YC stock
– Also have option of switching permanently to $125k annual bonus or equivalent in YC or SpaceX stock. I don't know if anyone's taken us up on this.
New offers: $175k annual salary + $125k annual bonus || equivalent in YC or SpaceX stock. Bonus is subject to performance review, where you may get 0% or significantly greater than 100%.
Special cases: gdb + Ilya + Trevor
The plan is to keep a mostly flat salary, and use the bonus multiple as a way to reward strong performers.
Some notes:
– We use a 20% annualized discount for the 8 years until the stock becomes liquid, the $125k bonus equates to 12bps in YC. So the terminal value is more like $750k. This number sounds a lot more impressive, though obviously it's hard to value exactly.
– The founding team was initially offered $175k each. The day after the lab launched, we proactively increased everyone's salary by $100k, telling them that we are financially committed to them as the lab becomes successful, and asking for a personal promise to ignore all counteroffers and trust we'll take care of them.
– We're currently interviewing Ian Goodfellow from Brain, who is one of the top 2 scientists in the field we don't have (the other being Alex Graves, who is a DeepMind loyalist). He's the best person on Brain, so Google will fight for him. We're grandfathering him into the founding team offer.
Some salary datapoints:
– John was offered $250k all-in annualized at DeepMind, thought he could negotiate to $300k easily.
– Wojciech was verbally offered ~$1.25M/year at FAIR (no concrete letter though)
– Andrew Tulloch is getting $800k/year at FB. (A lot is stock which is vesting.)
– Ian Goodfellow is currently getting $165k cash + $600k stock/year at Google.
– Apple is a bit desperate and offering people $550k cash (plus stock, presumably). I don't think anyone good is saying yes.
Two concrete candidates that are on my mind:
– Andrew is very close to saying yes. However, he's concerned about taking such a large paycut.
– Ian has stated he's not primarily concerned with money, but the Bay Area is expensive / wants to make sure he can buy a house. I don't know what will happen if/when Google starts throwing around the numbers they threw at Ilya.
My immediate questions:
1. I expect Andrew will try to negotiate up. Should we stick to his offer, and tell him to only join if he's excited enough to take that kind of paycut (and that others have left more behind)?
2. Ian will be interviewing + (I'm sure) getting an offer on Wednesday. Should we consider his offer final, or be willing to slide depending on what Google offers?
3. Depending on the answers to 1 + 2, I'm wondering if this flat strategy makes sense. If we keep it, I feel we'll have to really sell people on the bonus multiplier. Maybe one option would be using a signing bonus as a lever to get people to sign?
4. Very secondary, but our intern comp is also below market: $9k/mo. (FB offers $9k + free housing, Google offers like $11k/mo all-in.) Comp is much less important to interns than to FT people, since the experience is primary. But I think we may have lost a candidate who was on the edge to this. Given the dollar/hour is so much lower than for FT, should we consider increasing the amount?
I'm happy to chat about this at any time.
– gdb
Elon Musk to Greg Brockman, (cc: Sam Altman) – Feb 22, 2016 12:09 AM
We need to do what it takes to get the top talent. Let's go higher. If, at some point, we need to revisit what existing people are getting paid, that's fine.
Either we get the best people in the world or we will get whipped by Deepmind.
Whatever it takes to bring on ace talent is fine by me.
Deepmind is causing me extreme mental stress. If they win, it will be really bad news with their one mind to rule the world philosophy. They are obviously making major progress and well they should, given the talent level over there.
Greg Brockman to Elon Musk, (cc: Sam Altman) – Feb 22, 2016 12:21 AM
Read you loud and clear. Sounds like a plan. Will plan to continue working with sama on specifics, but let me know if you'd like to be kept in the loop.
– gdb