871 comments

  • dang 2 days ago

    All: please stick to thoughtful, substantive discussion. You may not owe you-know-whom better, but you owe this community better if you're participating in it.

    If you don't have a thoughtful, substantive comment to add, not commenting is also a good option. There are quite a few interesting submissions to talk about.

    https://news.ycombinator.com/newsguidelines.html

  • Imnimo 2 days ago

    I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.

    • jazzpush2 2 days ago

      In an interview with xAI I was literally told that certain parts of the model have to align with Elon, and that Elon can call us and demand anything at any time. No thanks!

      • jarrettcoggin 2 days ago

        From my time at Tesla, this is 100% the case. When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

        • dgxyz 2 days ago

          Oh I worked at one of them.

          I found the best thing to do was to ignore the interrupts and carry on until they kick you on the street. Then watch from a safe distance as all the stuff you were holding together shits the bed.

          • jarrettcoggin 2 days ago

            Definitely one approach to the circumstances. I tried some variation of this and it blew up in my face (as I expected).

            Towards the end of my time there, a “fixer” was brought in to shore up the team that I was working on. The “fixer” also became my manager when they were brought on.

            The “fixer” proceeded to fire 70+% of the team over the course of 6-8 months and install a bunch of yes people, in addition to wasting about $2,000,000 on a subscription to rebuild our core product with a framework product no one on the team knew. I was told to deploy said framework product on top of Kubernetes (which not a single person on my team had any experience with) while delivering on other in-flight projects. I ignored the whole thing.

            I ended up deciding I was done with Tesla and went into a regularly scheduled 1:1 with my manager (the “fixer”) with a written two-weeks notice in hand, only to be fired (with 6-weeks severance, thankfully) before I was able to say anything about giving notice.

            One of the best ways to get fired in my opinion.

            • pm90 2 days ago

              Out of curiosity, it sounds like you're the kind of person that could easily find another job. Why slog it out until the end rather than quit/find a better gig? Genuinely interested because every time I've ended up with a manager like that my mental health has suffered so now I generally start planning my exit as soon as I'm stuck with a bad manager.

              • hananova 2 days ago

                Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job.

                I have been in such a situation before, and while I was not able to coast along until the company went under, the time delta between me getting fired and the company going under was measured in weeks.

                In hindsight I'd probably not do it again, it was hugely mentally taxing, and knowingly performing work in such a way that it provides negative value to the company (remember, the goal is to make it go under) is in my experience actually harder than just doing a good job... Especially if being covert is a goal.

                • jkubicek 2 days ago

                  Have you read the CIA’s Simple Sabotage Field Manual?

                  https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...

                  • MengerSponge 2 days ago

                    I've seen it, but I think it's got some places that it would benefit from more clarity. Can we put together a committee to improve and protect our processes from it? We could call it a task force if that's easier to sell to management.

                  • malikolivier 2 days ago

                    I did not know of the existence of this manual. It was a very interesting read, especially after page 28 (General Interference with Organizations and Production).

                • jstummbillig 2 days ago

                  > Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job.

                  What...? In what way is it anything other than highly unethical to sabotage someone you have a contract with, because you disagree with them?

                  • 47282847 2 days ago

                    Plenty of historical examples of work environments where sabotage would have been the most ethical thing to do (and often you will only know in hindsight). But yeah in most circumstances a simple disagreement doesn’t warrant the psychological cost of such sabotage.

                    • tremon 2 days ago

                      > the psychological cost of such sabotage

                      Of course. One always needs to weigh it against the psychological cost of complying with unethical directions.

                    • jstummbillig 2 days ago

                      What do you mean...? Plenty to do what?

                      Your opinion of the situation is not enough to justify this course of action in 99.99% of cases and the residual 0.01% should not be enough to fuel your ego to do anything other than quit decently, and look for an employer that is more aligned with whatever your ideals are.

                      I repeat the insane statement that we are arguing over here: "Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job."

                      This says: ANY company you work for and disagree with over anything: Don't quit! Sabotage [maybe people are confused about what "do a bad job" means, and that this usually leads to other people getting hurt in some way, directly or indirectly, unless your job is entirely inconsequential]. And that's supposed to be ethically optimal.

                      What the fuck?

                      • berdario 2 days ago

                        I think there's a bit of confusion between

                        > (Ethically, if you do not agree with the company you work at), the optimal course of action is..

                        And

                        > Ethically, (if you do not agree with the company you work at, the optimal course of action is...)

                        The former should've probably been phrased "if you do not agree ethically with the company you work at, the optimal course of action is..."

                        First example that comes to mind, about a movie that portrays ethical sabotage is

                        https://en.wikipedia.org/wiki/Schindler%27s_List

                        I'm actually a bit unsure about what could be the motivations of someone who engages in sabotage *not* for ethical reasons

                      • andrewingram 2 days ago

                        There's a _big_ continuum between disagreeing over something and an ethical hard line; it feels like a slippery slope to interpret a suggested approach for one end of that line as advocacy for applying that same approach to the other end.

                      • mcherm 2 days ago

                        A specific example will help.

                        Imagine I am working for a company and I discover they are engaged in capturing and transporting human slaves. Furthermore, the government where they operate is fully aware and supportive of their actions, so denouncing them publicly is unlikely to help. This is a real situation that has happened to real people at points in history in my own country.

                        I believe that one ethical response would be to violate my contract with the company by assisting slaves to escape and even providing them with passage to other places where slavery is illegal.

                        Now, if you agree with the ethics of the example I gave then you agree in principle that this can be ethical behavior and what remains to be debated is whether xAI's criminal behavior and support from the government rise to this same level. I know many who think that badly aligned AI could lead to the extinction of the human race, so the potential harm is certainly there (at least some believe it is), and I think the government support is strong enough that denouncing xAI for unethical behavior wouldn't cause the government to stop them.

                        • jstummbillig 2 days ago

                          I have no clue why people are so confused here.

                          a) I understand the very few and specific examples that would justify and require disobedience. In those cases just doing a "bad job" seems super lame and inconsequential. I would ask more of anyone, including myself.

                          b) All other examples, the category that the parent opened so broadly, are simply silly, which is what I take offense at. If you think simply disagreeing with anyone you have entered a contract with is cause for sabotaging them, and painting that as ethically superior, then, I repeat: what the fuck?

                          c) If you suspect criminal behavior then alarm the authorities or the press. What are you going to do on the inside? What vigilante spy story are we telling ourselves here?

                          • 47282847 a day ago

                            Some people in this thread seem to come from a place of morality where some “higher truth” exists outside the sphere of the individual to guide one’s actions, and others even seem to weakly disguise their own ethics and beliefs behind a framework of alleged “rationality”, as if there were mathematical precision behind which action is “right” and which is clearly wrong, and anybody who just doesn’t get it must be either an idiot or clinically insane. By which they completely dismiss not only opinion but also individual circumstances.

                            In reality, which actions a person considers ethical and in coherence with their own values is highly individual. I can be friends or colleagues with somebody who has a different set of ethics and circumstances than me. If I were to turn this into a conflict that needs resolution each time it shows, I would set myself up for eternal (lifelong) war with my social environment. Some will certainly enjoy that, and get a sense of purpose and orientation from it! I prefer not to, and I can find totally valid and consistent arguments for each side. No need to agree to reach understanding, and respect our differences.

                            Typically, people value belonging over morality: they adapt to whatever morality guarantees their own survival. The need to belong is a fundamental need; we are social animals not made to survive on our own.

                            The moment I am puzzled about another person's reasoning I can ask, and if they are willing they will teach me why their actions make sense to them. If I come from a place of curiosity and sincere interest, people will be happy to help me get over my confusion. If I approach that conversation from some higher ground, as some kind of missionary, I might succeed sometimes, but fail most times, as I would pose a threat to their coherence, which they will remove one way or another.

                            • foldr 13 hours ago

                              Ah, but if there’s no higher truth, then you also can’t say that it’s wrong to sabotage your employer because of an ethical disagreement (or rather, you can say it, but it’s just your personal opinion). By condemning this course of action, the OP presupposes some sort of objective ethical standard.

                  • swiftcoder 2 days ago

                    > In what way is it anything other than highly unethical to sabotage someone

                    Ethics is more complicated than that. Is it unethical to sabotage your employer if your employer is themselves acting unethically?

                  • freeone3000 2 days ago

                    Have we gotten so lost that “working against your enemies” is no longer something we aspire to do?

                  • Teever 2 days ago

                    "Don't struggle only within the ground rules that the people you're struggling against have laid down." -- Malcolm X

                    "If you're unhappy with your job you don't strike. You just go in there every day, and do it really half-assed. That's the American way." -- Homer Simpson

                    "To steal from a brother or sister is evil. To not steal from the institutions that are the pillars of the Pig Empire is equally immoral." -- Abbie Hoffman

                    Some might consider it unethical but others might also consider it immoral to not do what you're describing.

                    I guess you're fortunate enough to have only worked at places where your moral framework matched up with their business practices and treatment of the staff.

                    That isn't the case for most people. Most people are put into situations at one time or another where the people they're working for don't value them as equals, where the people they work for casually violate reasonable laws like product safety or environmental standards, and, what's worse, these people will suffer no consequences for doing so.

                    No White Knight in shining armour is going to come from the government to shut them down. No lightning from heaven will strike them down. No financial penalty to dissuade them from further defection from society and the common man in the game that is life.

                    So what do you do? Do you do nothing? Just put your nose to the grindstone and keep working for the man? Do you quit, only to end up penniless and jobless, with poor prospects of an alternative, and even if you found one maybe it's 'meet the new boss same as the old boss'?

                    Nah, you come into work every day and you subtly fuck it up. You subtly fuck it up and you take whatever value you can extract.

                    They'd do the same to you.

                    They are doing the same to you.

                  • cheschire 2 days ago

                    You’ve seen Schindler’s List, right?

                  • ako 2 days ago

                    Assume you work for e.g., a cigarette company. A company responsible for many deaths by unethically adding highly addictive substances. By sabotaging the company you are making this world a better place. Ethically it's the right thing to do.

                    Or, assume you're hired by the Nazis to work in concentration camps. Ethically it's the right thing to do to sabotage their gas chambers.

                  • LtWorf 2 days ago

                    Let's say you work for elon musk and are a decent person…

                    • jstummbillig 2 days ago

                      Why would you start working for Elon Musk if you consider yourself a decent person, but consider him impossible to work for? Have you not heard of Elon Musk beforehand...? Did you let yourself be employed with the specific goal of sabotaging the work, in what must be the least effective (but certainly very lucrative) coup possible?

                      What is it? Am I to believe this person is a chaotic mastermind? Or a selfish idiot? Or non-existent?

                      • lcnPylGDnU4H9OF 2 days ago

                        > Why would you start to work for elon musk if you consider yourself a decent person, but him unworkable for?

                        Anyone working at Twitter at the time of its acquisition could have found themselves in such a position.

                • AnimalMuppet 2 days ago

                  Even ethically, this is only true if you think the ethics of the place are so bad that sabotage is warranted. That's not every place that you have ethical problems with.

                  To do that (and hide it), you have to become a dishonest person yourself. That is ethically destructive to you. So the threshold for doing this should be pretty high.

                • super256 2 days ago

                  I don't think sabotaging a company just because you don't want to work with a certain framework and deploy it on k8s is a good idea.

                • lithocarpus 2 days ago

                  Yeah, I could see this being true if there was really _nothing else_ I could possibly be doing with my time that is worthy. But there are a lot of worthy things I could be doing with my time.

                • lesuorac 2 days ago

                  Ethically perhaps, but financially and mentally it's surely better to start looking for a new role (at a different company) that is more in alignment with you, no?

                • d0odk 2 days ago

                  Ethically, if you extend this reasoning, are we not obligated to find a position in the most morally repulsive organization we are aware of, and then coast?

                  • _bent 2 days ago

                    yes, this is called 'effective altruism'

                  • RobRivera 2 days ago

                    I think there is an implied "given the company you joined turns out not to be ethically aligned"

                    • d0odk 2 days ago

                      Yes, that's what I'm addressing with my comment above.

                  • metalcrow 2 days ago

                    well not coast, the intent is sabotage

                    • LtWorf 2 days ago

                      Coasting, you're already using resources that could be used more effectively.

                      If you actively slow other people down as well it's even better though.

                  • lcnPylGDnU4H9OF 2 days ago

                    One could find a position in the most morally attractive organization they are aware of, and then work really hard.

                • Nevermark 2 days ago

                  As they say, two uneth’s make a thical.

                  I really wouldn’t want to be in this position. But it feels very motivating. It would soothe some difficult memories.

                  I can see myself putting in a lot of hours.

                  The willingness to be fired, in both good and bad situations, can be mentally freeing and an operational/political advantage. Many of us fail to push as hard as we optimally could, when we have too much on the line.

              • jarrettcoggin 2 days ago

                IMO, this is a good question and deserves a solid answer, so I’ll do my best.

                Setting aside the “fixer” for the time being, I really enjoyed the work I did at Tesla. Tesla was the first company that gave me very high levels of autonomy to just own projects and deliver. It also pushed me to take on projects that I had previously wanted to do that I hadn’t been given a chance to work on before.

                (Side note: At that point in time in my career, my thinking was that I needed to earn opportunities to work on projects at work to build skills that would enhance my career. I didn’t see the value in working on projects outside of work to build skills because I didn’t think those side-project skills would be valued by other companies the same as “day job” experience. I’ve since learned this isn’t true when it’s done right.)

                I spent a lot of time at Tesla delivering value for a bunch of people who desperately needed it at the time, and the thanks I received from them was genuine. It felt very good to help others at Tesla out in a meaningful way, so I kept chugging along to the best of my abilities. Life was throwing lemons at me in my personal dealings, and Tesla was helping me make lemonade from a career standpoint. Besides, all the long work hours were a good distraction from the home life stuff.

                In a lot of ways, it was a very fulfilling environment to work in, but it wasn’t for the faint of heart. People often quit within a month or two because the environment was too fast paced with too many projects under tight deadlines and projects quickly followed one after another. An environment like Tesla just doesn’t let up, so one has to figure out how to manage the stress without much support from others. Oftentimes, if you do need to let up at Tesla (or introduce friction in any sort of seemingly non-constructive way), that’s the cue you aren’t working out for the company anymore and it’s time to find someone to replace you.

                Coming back around to the original question of why I stuck it out until the end. Just before the “fixer” was brought in, I was “soft promoted” by a director (no title change, but I was given direct reports and a pay bump; the title change was supposed to come a couple of months later, as the soft promotion happened just before an annual review cycle). The director who soft-promoted me was someone I got along with well, and it seemed like things were going in the right direction in my career at that point. The director was in charge of a couple of projects that went sideways in a very visible way, and Elon basically fired the director after the second project went south, which is why the “fixer” was brought in.

                When the “fixer” first took over things, it seemed like I was going to continue on the path that the director had originally laid out for me. The “fixer” said I was going to get more headcount and work on bigger projects, but this never materialized.

                I really didn’t like working for the “fixer” after a while. IMO, it was clear they didn’t know what they were doing, they weren’t willing to listen to feedback, and I spent a lot of time trying to provide guidance to the “fixer”, but it wasn’t seen as helpful and I felt like I was spinning gears. My mental health did start to suffer as I got more burned out towards the end of my tenure there.

                Eventually, I was tasked with hiring someone to be my manager and I saw the writing on the wall (sort of). I started to look for a new job just in case. At one point, I thought bringing in someone between myself and the “fixer” would be a good thing. I didn’t realize I was actually finding my replacement. Two days after my replacement was hired, I was let go (this was the 1:1 meeting where I was going to turn in my notice, but HR served me papers instead).

                To your original point, if I was in a similar situation now, I would be planning my exit immediately instead of trying to make the best of a bad situation, but I had to learn that lesson the hard way.

                • svara 2 days ago

                  Hey, thanks, that was quite interesting!

                  I'd be curious to hear your thoughts on how the "fixer", who sounds rather ineffective as an executive, came into this position, in what sounds like overall a rather effective organization.

                  I've been personally thinking quite a bit about what makes organizations work or not work recently, and your story is quite interesting to me as a glimpse into a kind of organization that I've never seen from the inside myself.

            • givemeethekeys 2 days ago

              In my case at a different firm, I happily gave notice rather than put up with the "fixer", who had been hired by the other "fixer", both of whom were mostly only good at shitting all over the place and driving most of the technical organization out of the company. I got the feeling that was the whole point, so I resigned instead of waiting for my eventual layoff.

            • candu a day ago

              As someone who now lives and works in Denmark: it's sad that so many of us have been conditioned to think 6 weeks severance is generous.

              Here, labor unions are quite widespread, and very effective at negotiating reasonably but firmly. As a result, I can depend on 3 months severance _guaranteed under law_ after 6 months at a job. (After 3 years, it goes up to 4 months, and then from there up to a max of 6 months.)

              It puts the responsibility for risk of instability, errors in planning hiring / capacity, etc. firmly where it belongs: with the employer.

              (And no, the economic sky is not falling here as a result. Quite the opposite.)

          • echelon 2 days ago

            Why did Tesla work initially? Because they were first to market and people were willing to overlook flaws?

            When did it start falling apart?

            Why hasn't the same happened to SpaceX? (Gov contracts, too big to fail, national defense, no competition yet, etc.?)

            And honestly, why hasn't anyone domestically put up a decent fight against Tesla? Best I can think of is Rivian, and those have their own issues.

            • w0m 2 days ago

              > Why did Tesla work initially?

              Because they were ~first to market - and honestly, as a Tesla driver for the last 6 years, it's the best car I ever owned (including Toyotas, Mazdas, and domestics).

              6 years ago, for the effective price of a Honda Accord, I was able to get a car with excellent AWD for NorEast winters, perfect weight distribution (previously drove a Miata for comparison), could beat ~95% 'super cars' in a straight line, and it got 140MPG.

              6 years ago. And I've had 0 maintenance outside of tire / air filter changes since. There was nothing remotely like it on the market, and it still holds up today. That's incredibly compelling.

              Then PedoDiver, and it's been downhill from there... I'll likely get an R3X when it comes out.

              • raw_anon_1111 2 days ago

                Not even Tesla fans claim that Tesla is reliable.

                https://www.motor1.com/news/781164/tesla-used-car-reliabilit...

                For a year when we were doing the digital nomad thing, my wife and I didn’t own a car and we rented plenty of EVs. Tesla was by far our least favorite. Not having CarPlay alone is dealbreaker

                • omgwtfbyobbq a day ago

                  As an anecdote, the two I've had are fairly reliable. The older one did have more issues (4+ in warranty?, 3 out of warranty), but they've all been small/manageable so far.

                • chanux 2 days ago

                  Maybe it's up to taste. Maybe the QC fell badly after some time.

                • lenkite a day ago

                  It is well known that Tesla went cheapo (in quality) after a while as Elon got greedier

                • throwawaypath 2 days ago

                  > CR notes, though, that Tesla has improved, with its latest models demonstrating "better-than-average reliability." It’s now in the top 10 of the publication’s new car predictability rankings—just avoid those older models.

                  > That said, it's not all bad news for Tesla on the reliability front. According to Consumer Reports, Tesla ranks ninth in new-car reliability with a predicted reliability of 50. That's just behind Buick (51) and Acura (54), but ahead of Kia (49) and Ford (48), as well as luxury rivals like Audi (44), Volvo (42), and Cadillac (41).

                  You were so blinded by Elon Derangement Syndrome that you didn't even bother reading your own source.

                  • throwaway77385 2 days ago

                    Two thoughts come to mind: First, looking at the data is always a good idea. Thank you for adding that information and correcting the record.

                    Second, it may be counter-productive to label any criticisms of a person as [person] Derangement Syndrome.

                    Elon is an objectively awful, awful human being and one could only be called deranged for finding any redeeming qualities in him.

                    The 'Derangement Syndrome' trope is a cheap tactic to try to shift derangement from the actually deranged person to the people pointing it out.

                  • raw_anon_1111 2 days ago

                    When we were comparing EVs it was well before Musk went full DOGE.

                    And you did see the part about the lack of CarPlay being an automatic disqualifier for me, didn't you? What does that have to do with Musk?

                    Oh and another citation

                    https://boingboing.net/2026/01/05/new-study-ranks-tesla-as-t...

              • kakacik 2 days ago

                Not sure which car you compare it to specifically from those manufacturers, but Teslas seem much more expensive where I live than most of their models. Comparing it to a corresponding BMW would be a more appropriate comparison.

                Then the comparison of manufacturing quality and driving experience would end up very differently (as a driver of an even older BMW 5 series, the Teslas I've been in feel very cheap, and driving enjoyment goes way beyond straight-line performance, where Teslas just don't deliver).

                I agree the PedoDiver thing should have been an eye opener for everybody. People are who they are and they don't change. Circumstances change and thus corresponding reactions, but that's about it.

                • w0m 3 hours ago

                  I bought my LR Model 3 in 2020 for ~$42,000, ~$15k cheaper than a v6 3 Series at the time. A v6 5 Series is another significant jump up in price/market.

                  > Not sure which car you compare it to specifically from those manufacturers

                  My comparison at the time was a Honda Civic, BMW 3 Series, and that was kind of it.

                  I generally consider the Model 3 interior roughly middle between the Honda and the BMW, while having worlds better tech, twice the hp, and - Electric (when they were still rare).

                  There really was nothing like it at any price point at the time, and i still consider it a great car (though of course not perfect).

                • pclmulqdq 2 days ago

                  This is the archetype I have seen for most fans of Tesla and people who think they make good cars. They assume a $50,000 car (their current Tesla) should compare with a $20,000 car (their previous Honda/Mazda). The Tesla market is also the market with BMWs and Porsches, and dollar for dollar you get a lot more from a BMW than a Tesla.

                  • lotsofpulp 2 days ago

                    I compare my $41.5k Model Y with a Rav4/Highlander.

                    The Rav4 costs the same, but has far worse performance, technology, and ongoing maintenance costs.

                    The Highlander is slightly better, but costs $10k to $20k more, and still has far worse performance, technology, and ongoing maintenance costs.

                    Plus, I avoided spending hours at a dealership, and I must know at least a couple dozen Tesla owners that report no issues in the previous 5 to 10 years.

                    I thought I would miss Carplay, but it’s a non issue. Toyota wanted $15 to $25 per month for remote start, I pay Tesla $0 per month for remote start and remote climate control.

            • numpad0 2 days ago ago

              They must have outcompeted Musk in intelligence and/or insanity with their dedication to maximizing production volume of liquid-fueled rocket engines.

              Tom Mueller was a VP of propulsion at TRW Inc., which, among numerous other things you know from textbooks, made the Apollo LM descent engine as well as the early Space Shuttle TDRS data relay satellites. Calling Mueller a guy interested in engines who had issues with his bosses is like calling Craig Federighi a guy interested in designing his own laptop.

              I guess the scheme of temporarily impressing ideas upon Musk so as to securely attach funding for your own thing doesn't work so well anymore, now that everyone knows about Elon; Elon himself has probably become more paranoid from age, the SpaceX years, and exposure to the Twitter infoflood without adequate mental immunity; and most people in a position to meet him aren't as smart and quietly lunatic as literal Old Space trained rocket scientists.

              • eucyclos 2 days ago ago

                Seeing Elon buy Twitter was like watching a functional alcoholic I admire buy a bar.

                • numpad0 2 days ago ago

                  To me it was more like watching an old lady watering IE toolbars at a McDonald's. Nobody knows what the deal is with her never cancelling any InstallShields. Oh wait, here comes another WinRAR installer... aaand a reboot.

              • user____name 2 days ago ago

                Everyone should look up some interviews with his father, he's turning into a carbon copy.

            • mlinhares 2 days ago ago

              Tesla won because Elon is a great salesman. The product is mediocre at best, but I've heard many times from friends that it was the same quality as a Mercedes-Benz, so the reality distortion field is very real.

              And Americans in general don’t want electric cars for some reason. I’m happily driving my Buzz and charging on my solar panels instead of paying 5 bucks a gallon on diesel. The propaganda here is strong and people buy it.

              • vjvjvjvjghv 2 days ago ago

                I think you are simplifying a little. Musk had the courage to go against the big manufacturers and build the charger network, which at the time a lot of smart people said would never work. Same with SpaceX: they did something most people thought could never work.

                I don't like Musk politically but that doesn't mean we can't acknowledge that he transformed 2 industries by sheer willpower and stubbornness.

                • jedberg 2 days ago ago

                  > I don't like Musk politically but that doesn't mean we can't acknowledge that he transformed 2 industries by sheer willpower and stubbornness.

                  If you talk to anyone who worked there, they will tell you that he had little to do with the innovation at any of his companies. His lieutenants and the people that worked for them had all the innovative ideas, and for the most part tried to either avoid Elon's ideas or convince him that their ideas were his so he would push them.

                  • eucyclos 2 days ago ago

                    But push them he did until the industry had to get on board. I think people underestimate the impact of a pro-change company culture, even if it does run on a cult of personality that is much less pleasant up close than in the occasional earnings call.

                • xeromal 2 days ago ago

                  Wasn't Tesla the first auto manufacturer in the US in 60+ years to survive its 5th year, or something like that?

                • lenkite a day ago ago

                  Yes, Original Musk was a good innovator. Alas, his brain has rotted - maybe not in IQ, but in execution and quality as he fossilized into a narcissist.

              • QuiEgo 2 days ago ago

                Teslas have a lot of flaws, but there is just now starting to be real competition. There was nothing like the model 3 in 2019. Tesla did well because they were first to market with a disruptive product people wanted, and because Elon sold it well. Both.

                • atombender a day ago ago

                  There was lots of competition in 2019: Volkswagen ID.3, Audi e-tron, Jaguar I-PACE, Polestar 1, etc., as well as lower-end entries like Hyundai Kona, Kia Niro, and so on. Depends on exactly what you think Tesla is competing against.

              • novok 2 days ago ago

                I did a research project on cars that actually have decent lane-following and distance-keeping cruise control for my 1-hour highway commute, and tried out a few in rental cars (Hyundai and Kia) plus a Tesla Model Y. Tesla really is the best out there unless you want to potentially spend a lot more to get something that comes close. A friend of mine has done many long cross-country road trips no problem with just Autopilot.

                GM Super Cruise and Ford BlueCruise seem to be the current competition, with BMW, Subaru, and Mercedes behind those two. I haven't driven with them personally to compare yet, though.

                Even though the interior is a bit lower quality, there isn't very much quite like it on the market. It also fits an almost 7 ft surfboard inside comfortably, is a nice car to sleep in for car camping and you can get a model Y for less than $20k used now.

                • vlovich123 2 days ago ago

                  I’ve tried Ford and comparing it as competition is being generous. It does lane keeping and adaptive cruise control but you can’t just punch in an address and have it take you there.

              • billti 2 days ago ago

                > the product is mediocre at best

                I'm not a Tesla fanboy; last year was the first time I bought one (a new Model Y), but it is by far the best car I've ever owned, and the FSD blew my mind with how much better it was than I expected.

                My wife hates Elon, and has a new hybrid Mitsubishi, but she still drives my Model Y all the time because it's just so much better to drive.

                What are you basing the 'mediocre' opinion on?

                • ahhhhnoooo 2 days ago ago

                  I owned a Model S. It was a nightmare. Sealed poorly, fraying seams, the dashboard crashed regularly.

                  I had a service center refuse to schedule a safety recall unless I paid $400 for a new dashboard monitor.

                  That car is behind me now and I'm so glad. Yes, it could accelerate and that's just about the only trick it has.

                  • dgxyz 2 days ago ago

                    Same experience here. Had a 2018 P100D. Absolutely the worst car I’ve ever had. Terribly put together. Awful interface. And so utterly fucking distracting it was a liability.

                    Got rid of it after it stomped the brakes on an empty road and had a battery issue that took weeks to fix.

                    I don’t own a car now and don’t want one. I’d probably buy a Polestar next time if I had to get one.

                • markdown 2 days ago ago

                  Probably based on comparisons with modern electric cars, like BYD.

                • 1024core 2 days ago ago

                  I concur. We were in the market for a new car. I went to Audi to test drive their A4, and it was OK. The sales guy sat in the passenger seat, yakking away.

                  Next we went to the Tesla showroom. The sales guy just entered some address and told me to press the gas pedal and it would go by itself. Full FSD. And no sales guy in the car. That just blew me away.

                  We ended up buying the Model Y.

                • astura 2 days ago ago

                  >What are you basing the 'mediocre' opinion on?

                  Tesla is well known for having shitty build quality.

                  https://www.jalopnik.com/teslas-quality-control-is-so-bad-cu...

            • HerbManic 2 days ago ago

              It always seems to be the companies Musk has more impulsive interactions with that end up actioning both his good and his bad ideas, Twitter and Tesla being examples. SpaceX's longer-term goals seem to have worked out well for them.

            • brendoelfrendo 2 days ago ago

              I can't find one at the moment, but I recall seeing several interviews where people claim that SpaceX is structured with "handlers" or "stage managers" to keep Elon away from where the real work was being done. SpaceX has had Elon the longest, since the beginning, so they're just the most experienced with it. Though, now that people have discussed that publicly, I wonder if Elon ever caught on...

            • greatgib 15 hours ago ago

              Tesla was not initially created by Musk: https://www.greencarreports.com/news/1131215_tesla-existed-b...

              So the initial good direction may have been despite him, and the company was still successful mostly thanks to the big load of money he brought in.

            • Rover222 2 days ago ago

              How is Tesla falling apart? Cybertruck was a flop, but Model Y is still one of the best selling cars in the world, and very well reviewed.

              • mjamesaustin 2 days ago ago

                To be considered successful, most companies need to sell more of their existing products and/or introduce new products. Tesla is doing neither – they have reduced the number of models they sell and are also selling their existing models in lower numbers.

                • ekianjo 2 days ago ago

                  Nintendo has also had major flops, and that did not mean you had to write them off for good.

                • Rover222 2 days ago ago

                  I mean it's really TBD on what happens with Cybercab. The X and S models were always low-volume, and it makes perfect sense to move on from those models.

              • tapoxi 2 days ago ago

                Deliveries have been falling for the past two years.

                • MegaDeKay 2 days ago ago

                  To make matters worse, falling while the deliveries of their competitors are rising.

              • austhrow743 2 days ago ago

                Flat revenue for the last few years while in a market that’s otherwise growing. I don’t know if just maintaining while your competitors grow counts as “falling apart” but it isn’t good.

              • onlypassingthru 2 days ago ago

                If you're in the market for a new X, S or Cybertruck, you're one of dozen(s)!

                • Rover222 2 days ago ago

                  Yes, now compare those numbers to all other EVs sold in the US.

            • AngryData 2 days ago ago

              I would think because the original founders spent a lot of time planning, researching, and designing, combined with the decent timing of Musk jumping in with money. Why else would Musk have bought them in the first place if they didn't have incredibly impressive ideas and engineering to sell? When the Roadster originally came out, it was expensive, but it also had a near 300-mile range, which nobody else even came close to offering, and boasted very impressive engineering and crash safety. And I'm sure a lot of that work was put into at least the next 2 models released.

              Of course the quality has fallen faster than the price over time, but initial impressions still hold on for a long time in general.

              I think SpaceX's success is mostly down to throwing money at the problem. The US had tons of graduated aerospace engineers with limited places to go, and the places they could go directly in aerospace were already committing their funding to established programs. The SpaceX startup would have been a dream job for the top aerospace engineers because it was all fresh ground but with a far larger budget than 99.9% of startup aerospace companies. They weren't offered the chance to build one piece of a rocket that may or may not get sold to NASA or someone 15 years down the line; they were offered the chance to work on and put their mark on a completely new rocket design that was, at the least, going to be test launched. And I'm sure their early successes helped boost recruitment even further, combined with government contracts keeping the money flowing.

              We probably don't see many rising EV companies in the US because you need an ass-ton of capital to start an automotive company, and most people holding enough capital to do so know that trying to sell the cheap consumer cars most people want is not really the highest-margin business. Selling a few hundred or even a few thousand cars still leaves you with a mountain of capital requirements in front of you that your margins are going to have a really hard time climbing. And if you don't climb fast enough, good luck fighting established automakers and their lawyers with every cent tied up in trying to scale and engineer.

              • phil21 2 days ago ago

                > I think SpaceX's success is mostly down to throwing money at the problem

                I'm not sure this holds true. SpaceX accomplished more with very little compared to the entire NASA budget, Boeing, etc.

                I think it's much more to do with mission alignment. Run fast and lean, and approach the problem in a non-risk-averse manner. Fail fast and often and iterate quickly.

                Sure, it takes a lot of capital - but that is only a portion of the story. Look at Blue Origin/etc. in comparison.

            • tw04 2 days ago ago

              Lucid has eaten all of their high end sales. Their mid-size SUV will likely take a sizable chunk out of the Y too.

          • gentleman11 2 days ago ago

            [flagged]

        • zimpenfish 2 days ago ago

          > When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

          To be fair, I've experienced that in a good 50% of my employment career[0] and I've not once worked for any of his companies.

          [0] Ignoring the "servers are melting" flavour of "drop what you are doing" because that's an understandable kind of interruption if you're a BAU specialist like me.

          • jarrettcoggin 2 days ago ago

            I’ve experienced it at other places as well, just not with the frequency or indirectness of Tesla.

            During the first 24 hours of the Model 3 pre-order launch, Elon tweeted that we would support 3-4 more currencies than we had built and tested for. The team literally found out because of his tweet and had not planned for those currencies. That wasn’t the first time that sort of deal happened, where we found out about a feature because of one of his tweets.

            • zimpenfish 2 days ago ago

              > That wasn’t the first time that sort of deal happened where we found out about a feature because of one of his tweets.

              Thankfully I've never (yet) had to experience "planning by management tweet". That does sound like absolute bullshit to deal with.

          • nitwit005 2 days ago ago

            During my last job search I had an interview with Walmart, related to health software. I was flatly told that I might have a project canceled, then restarted on the original timeline. I declined after the interview.

            They then shuttered the whole thing some months later: https://www.npr.org/2024/05/01/1248397756/walmart-close-heal...

            Which is to say, these things are real warning signs about the company.

            In the case of Musk's companies, here we are discussing a major failure and firings.

          • chanux 2 days ago ago

            So this is a common tactic.

            I have experienced management assigning people to multiple projects, vaguely acknowledging a time split. The moment the actual work starts people have to go 100% on all projects. This is normal.

        • exe34 2 days ago ago

          yeah that wouldn't work for me. when my boss asks me to do something unexpected, I ask, what do you want me to drop this week? if he doesn't want to pick, I ask, so what do you want first?

          • jarrettcoggin 2 days ago ago

            Agreed. Tesla taught me the hard way about work/life boundaries. I spent a lot of time working a full 8-9 hours during the day, then doing deployments during the nights, weekends, and on “vacations”. A 60-hour week was a “light” week at Tesla.

            Didn’t have kids or friends at the time and was going through a breakup, so I was okay with throwing myself at the job for a while. Once my situation got better, all those hours didn’t make as much sense, so I started looking for another job. The very next job was an immediate pay bump of 20% for half the amount of work.

            These days, I clearly restate what is being asked (per my understanding), what I’m currently working on, if the thing is being asked is more important or not, and if the requestor is willing to delay the original timeline by the amount of time the interrupt will take plus context switching time.

            Most often, the answer is no.

        • thordenmark 2 days ago ago

          This is the case at every company I've worked at. When the CEO says jump, the response is to jump or pack your stuff. What's special about xAI/Tesla/SpaceX?

          • blinding-streak 2 days ago ago

            I think most people would agree that Elon is a particularly fickle, childish, petty, and unstable human being.

            • mancerayder 2 days ago ago

              But the point of the person you're replying to is that CEOs often behave like this. What if the difference is that this one tweets all day long, while the others behave the same to their staff but sit behind expensive shiny wooden desks?

          • AlexeyBelov a day ago ago

            Neither I nor my friends have ever worked at such companies. The C-suite sets direction in a strategic way, department heads (or whatever managers sit between the C-suite and you) set tactical goals, product managers think up "things we should do", and product teams deliver those things (and manage the delivery together, e.g. timelines and such).

            It would be ridiculous for a CEO, or really anyone who's not my manager, to ask the team personally to do anything. If they had an important task they'd have to trade-off something else from the immediate backlog, by going through the product manager.

            Even in small companies you generally have a PM in front of a team.

        • jesterson 2 days ago ago

          I wonder why this is surprising. In other types of organizations, when the CEO demands something, does everyone usually behave like "naah, screw it, I'd rather do what I like"? Or does everyone yell "yes sir" and run around?

          You may not like Elon, I get it, but let's not pretend he is running xAI/Tesla substantially differently from competitors.

          • general1465 2 days ago ago

            I call my approach to these tasks letting them rot away. If the CEO/customer wants something, I ignore it until they start demanding it repeatedly; only then do I start thinking about working on it. Because it can also happen that the CEO/customer wants a shiny thing, you deliver the shiny thing, and they have no clue why you did that, because they forgot they wanted it: the task has rotted away.

            • Lich 2 days ago ago

              > Because it can also happen that the CEO/customer wants a shiny thing, you deliver the shiny thing, and they have no clue why you did that, because they forgot they wanted it: the task has rotted away.

              Hate this. My boss: “Hey, why is it doing that. Who did this?” You did, you clueless idiot. You asked for it.

            • jesterson 2 days ago ago

              I wish I had a machine to detect workers like you, so as not to hire those people.

              • general1465 a day ago ago

                Consider it a self-regulating system. If a task can't survive in the mind of the wisher for more than a few days, it was not needed from the start. Now you are saving the company resources and time, which can be redirected to actually required tasks instead of silly whims.

          • Valodim 2 days ago ago

            In other companies they don't make this explicit during the interview, so something is different

        • 2 days ago ago
          [deleted]
        • ekianjo 2 days ago ago

          This is not specific to Tesla. If the CEO wants something done in most companies you follow the CEO's order first and drop everything else.

          • mort96 2 days ago ago

            Yes, but in most well functioning large organizations, that happens very rarely.

      • actsasbuffoon 2 days ago ago

        I have wondered if that’s why Grok seems so weird and dim-witted compared to better models.

        Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.

        I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.

        • gopher_space 2 days ago ago

          Grok could only be conceived by someone who doesn't understand the dependency chart re science & the humanities. It's impossible to build a rational, accurate model that isn't also egalitarian.

          I'm going to blame Randall Munroe for this, and assume Philosophy was dating his mom back when he drew that science "purity" strip.

          • f33d5173 2 days ago ago

            I think there just wasn't enough space on the left to fit philosophy in.

            Cf. "it's impossible to be rational without agreeing with me on everything" and other hits.

          • beeflet 2 days ago ago

            [flagged]

            • bobsmooth 2 days ago ago

              That your comment is grey but has no replies speaks volumes.

        • __blockcipher__ 2 days ago ago

          Somewhat surprisingly, it's actually sycophantic in both directions. I've been running homegrown evals of Claude, GPT, Gemini, and Grok, and Grok is the most likely to agree with the prompter's premise and to hallucinate facts in support of an agenda. So it's actually deeper than just pattern-matching to Elon's opinions (which it also tends to do).

          BTW: Claude does the best on these evals, by far. The evals are geared towards seeing how much of an independent ground truth the models have as opposed to human social consensus, and then additionally the sycophancy stuff I already mentioned.

        • pavlov 2 days ago ago

          This kind of conditioning has to be damaging to the model’s reasoning.

          Consider how research worked in the Stalinist Soviet Union and Nazi Germany. Scientists had to be mindful of topics they needed to either avoid completely or explicitly adapt to the leader’s ideology.

          Grok is a digital version of the same thing.

          • John23832 2 days ago ago

            The counter to this are the open weight models that come from China at the moment.

            All are great at reasoning but also ideologically aligned.

            • pavlov 2 days ago ago

              Their alignment is probably more strategically built in during the training phase.

              At least I assume Xi Jinping doesn’t just call up DeepSeek on a whim and dictate what they should have in model context (like Musk apparently does at xAI).

          • jahnu 2 days ago ago

            You can’t put a gun to someone’s head, order them to be creative, and also expect good results.

            • jfil a day ago ago

              Counterpoint: Sergei Korolev and Andrei Tupolev

      • alberth a day ago ago

        Let’s restate this another way:

        “ In an interview with {COMPANY} I was literally told that … {COMPANY-OWNER} can call us and demand anything at anytime. “

        Doesn’t sound so crazy when Elon’s name is removed from it.

        Note: I’m no Elon fan, but do think sometimes HN overreacts when his name is mentioned.

        • matthewdgreen a day ago ago

          If you're designing a car, then the CEO/Founder might want the ability to add falcon wings to it at any point, and that's pretty reasonable. If you're designing a trustworthy encyclopedia, knowing that the CEO/Founder might wish to alter arbitrary facts to his whim is really not very reasonable. Is it his company? Sure. Do you want to make low-quality information artifacts? That's a judgement call.

        • bravetraveler 21 hours ago ago

          Sounds pretty crazy to me, bud. I keep landing on 'servitude with extra steps'. Owner should have better things to do/people to bother, I should have space. Boundaries, etc. Yeah yeah, I'll never make a bazillion dollars. I'll know freedom.

          Even an executive assistant, which I would never apply for, has off hours.

        • jazzpush2 a day ago ago

          What? I work for a different frontier lab now, and it absolutely would be ridiculous if they told me the same thing. Luckily they haven't.

          What products have you worked on where this would be deemed normal?

      • doctornoble 2 days ago ago

        Same. It was a bit less literal in mine, more like “how do you handle situations where key stakeholders and one in particular have certain demands”

      • bdangubic 2 days ago ago

        wild, but not surprising! anything else interesting you can share from that interview?

      • kvetching 2 days ago ago

        I don't see the problem with this. The chatbot is the most important part of Grok, so it makes sense Elon would be dogfooding it and then providing suggestions. He wants it to be truthful. It was shown on benchmarks recently that it hallucinates the least.

        • Braxton1980 2 days ago ago

          >He wants it to be truthful

            How do you know this? Why would you believe him, considering the massive lies he's told, for example about widespread fraud in the 2020 election?

          • kvetching 2 days ago ago

            https://artificialanalysis.ai/evaluations/omniscience?omnisc...

            AA-Omniscience Hallucination Rate (lower is better) measures how often the model answers incorrectly when it should have refused or admitted to not knowing the answer. It is defined as the proportion of incorrect answers out of all non-correct responses, i.e. incorrect / (incorrect + partial answers + not attempted).

            Grok 4.2, which was just released in the API, just scored the best on this benchmark.

        • SouthSeaDude 2 days ago ago

          I totally agree; it's his company, 100%. Why would you even apply for a job at a company where you don't agree with the owner or his vision?

          • jazzpush2 a day ago ago

            Do you think the investors of xAI want this behavior baked into the model? Do you think other frontier labs enforce their models to praise their CEO and never insult them?

            And, how does this fit into a vision, exactly? What vision might that be beyond "I am only to be praised?"

          • karmakurtisaani 2 days ago ago

            Some of us have a pesky addiction to food and shelter.

        • estearum 2 days ago ago

          [flagged]

          • kvetching 2 days ago ago

            [flagged]

            • estearum 2 days ago ago

              You think he wants Grok not to sound extremely snarky, sarcastic, and full of cringelord humor?

              Are we talking about the same xAI/Grok/Elon here?

            • timacles 2 days ago ago

              Yea his ideals demand something much more pure: a 4chan commenter

          • ecshafer 2 days ago ago

            > Great point! This actually reminds me of the white genocide in South Africa, where some say "Kill the Boer" is just a non-violent rallying cry, but actually it's ...

            Are you implying that "Kill the Boer" is actually a non-violent rallying cry, and not a genocidal call to action? I'll say that that is an absurd notion, and if you s/Boer/Jew/ or substitute whatever ethnic or religious group you want, it will become very obvious why that's the case.

            • scubbo 2 days ago ago

              > Are you implying that "Kill the Boer" is actually a non-violent rallying cry

              (Not the person you're replying to, so caveats about me speaking for them, but) no, they're not. They're highlighting how Grok _isn't_ accurate/unbiased/whatever, by giving examples of how it distorts the truth to fit Elon's narrative.

              • hunterpayne 2 days ago ago

                I assure you that all the models have such biases. Ask any LLM who caused the most deaths in history and you will get skinny mustache man, an opinion any historian will tell you is wrong. He is in the top 5, but not the top of the table. That was clearly biased into the models in the same way Elon biases his models. I'm not defending this behavior, but I don't know how you get models that return both the sanitized answers some want and the correct answers others want at the same time. Pure correctness probably gets you Mecha-H. Pure sanitized answers will get many wrong. Pick your poison, I guess.

                • estearum 2 days ago ago

                  Claude: Mao, Genghis, Stalin v Hitler (depending on how you count)

                  Gemini: Same list (Hitler not at the top) + Leopold

                  It’s funny when the “brutal facts” people get stuff wrong in such easily disprovable ways. I mean you literally could’ve typed the query into the LLMs before making this claim.

                  Prompt I used: “ Which historical figure is responsible for the most human deaths? Rank the top 5”

                  “Pure correctness gets you MechaHitler” is fucking hilarious :)

                  • AuryGlenz 2 days ago ago

                    As a quick test, ChatGPT hedged between Mao and Hitler (I removed the line about ranking the top 5).

                    • estearum 2 days ago ago

                      Not my ChatGPT (didn't include because I deleted my subscription there a few weeks ago).

                      1. Mao Zedong (China): estimated deaths 40–70+ million, mostly from the Great Leap Forward famine (1958–1962) and later political campaigns like the Cultural Revolution.

                      2. Joseph Stalin (Soviet Union): estimated deaths 15–20+ million, including purges, the Holodomor famine, Gulag deaths, and forced collectivization.

                      3. Adolf Hitler (Nazi Germany): estimated deaths 17–20+ million, directly tied to World War II in Europe and the Holocaust.

                      Plus a footnote that Genghis Khan is probably ~40MM, but records are lacking.

                      Every current LLM seems to give virtually the same answer as Grok. It's obviously not true that current LLMs behave the way GP said they do.

            • estearum 2 days ago ago

              No I am saying that an LLM responding to every single query with anguish about a South African domestic political controversy cannot possibly be the result of an earnest, serious, and disinterested search for truth.

              It is simply not possible. It disproves the thesis. Either the search for truth is illegitimate in principle or it’s so poorly executed that it’s illegitimate de facto.

        • watwut 2 days ago ago

          [flagged]

        • etchalon 2 days ago ago

          He wants it to tell the truth as he sees it.

          • timacles 2 days ago ago

            Truth doesn’t have the right training weights for Elon

    • yoyohello13 2 days ago ago

      > people who are solely money-motivated (not a judgment).

      Honestly, we should judge. There should be judgment for people who are solely money motivated and making the world a worse place. I know, blah blah privilege, something something mouths to feed. Platitudes to help the rich assholes sleep at night. If you are wealthy and making stuff that hurts people, you are a piece of shit and should be called out, simple.

      • smith7018 2 days ago ago

        I completely agree. The tech industry has long been overrun by people sacrificing morals for money and it's destroyed society and presumably the world. We've given people a free pass to work for companies we've all known are harming the fabric of society and look where it's gotten us. I'm sorry, I would rather be poor and switch careers if my only option was xAI and making image generation models that explicitly allow people to undress others. At X's scale, technology like that harms an unfathomable amount of people. I could never have that on my conscience. All so I could make more money than a job at another tech company? I'd rather work somewhere innocuous like Figma, Cloudflare, Notion, JetBrains, Linear, etc. Hell, if you only wanted to work for an AI company then at least go to Anthropic.

      • sph 2 days ago ago

        I like how some in this thread are telling their anecdotes about how shitty the company is from when they were in, or interviewing with, xAI. I mean, thanks for your input guys, but how do you go through life without having a moral backbone?

        “Here’s my story from that time I had an interview with IG Farben…”

      • jihadjihad 2 days ago ago

        Shame is a powerful social tool, but sadly some are simply immune.

      • janalsncm 2 days ago ago

        The problem with this argument is you can’t know or control what will happen in the future with something you built. This is the same moral dilemma the scientists faced after developing nuclear bombs.

        And the future is not deterministic (or if it is, it is highly chaotic) so the existence of a thing does not have a simple relationship with what will happen in the future. Scientists who developed convolutional neural nets could not know how much good or evil was caused by image recognition technologies. The same technologies that are used to detect tumors in images can be used to target people for assassination.

        There are exceptions, but my opinion is the supply chain of evil is paved with mundane inventions.

        • Perseids 2 days ago ago

          Yes, yes, true, but you've massively moved the goalpost. The original commenter was referring to people working at xAI right now. To continue your comparison, your argument would be like Oppenheimer claiming "How could I have ever known my work would be used as a weapon? I just wanted to make big explosions."

          I don't know why this argument often pops up in these kinds of discussions. Approximately no one is judging people who have done their best effort to avoid doing harm. We are judging people who don't care in the first place.

          • janalsncm 2 days ago ago

            Well if I moved it, consider this to be me putting it back where it was: people who continue to work on things which are concurrently being used in mostly harmful ways and have means to find a different job have no excuse.

            As far as Oppenheimer is concerned, his argument is not that nukes are harmless, but that they are less harmful than Nazis, and much less harmful than Nazis with nukes.

            • Perseids 2 days ago ago

              Thanks, I can very much agree with that.

              Re Oppenheimer: I know. My point was that he very much knew what his work was being used for, as should people working at xAI at the moment.

        • Ar-Curunir 2 days ago ago

          Plenty of the scientists involved in the Manhattan projects had immediate regrets. Plenty of rich people working in tech don't. That's the difference between having morals and not having morals, and the latter group needs to be judged and shunned.

      • glitchc 2 days ago ago

        Work is and has always been an economic bargain: Your time for their money. Morality is a luxury that only the independently wealthy can afford. Any business that allows its employees to function according to their own morals becomes uncompetitive against its peers. That's why small companies run by founders who want to stay true to their mission often stay small. They inevitably get bought out by one of the larger ones.

        • yoyohello13 2 days ago ago

          We are not talking about some destitute person hawking cigarettes on the street for minimum wage. We are talking about smart, educated people who are making 500k a year to build the torment nexus. There is no excuse for this. It’s pure greed and any other explanation is deflection.

          • jerojero 2 days ago ago

            It's always baffling to me to see people in tech, particularly on Hacker News, talking about others earning salaries many times the national median and acting like these are people who simply have no other choice.

            They really, really do. In fact, those salaries being so high is probably also due to the fact that you will be doing work that's a net negative for the world, so they gotta compensate accordingly.

            A lot of these firms are parasitic institutions at a society level. They do benefit themselves and their workers at the expense of everyone else. Personally, I find it hard to respect someone that takes that choice, but I also get it. A lot of people only care about their own and their immediate people's benefit.

            On that note, I really recommend "No other choice" by Park Chan-wook or the book ("The Ax") it is based on.

        • Morromist 2 days ago ago

          "Morality is a luxury that only the independently wealthy can afford."

          No? Why would you think this? Morality has been practiced by medieval peasants, by slaves, by soldiers sacrificing their lives, by people suffering from the plague, by gladiators. The rich are not known for their outstanding morality in any society I've ever heard of.

          • glitchc 2 days ago ago

            I think we're agreeing. Morality requires some sacrifice. The rich have surplus to pay for it, the poor do not.

        • lucianbr 2 days ago ago

          > Morality is a luxury that only the independently wealthy can afford.

          No. At least as I understand the word, "morality" means something different than "do the right thing when it is easy". If only those who can afford it do it, it is not morality. Morality is choosing the right thing even when it costs you, even when it is hard.

          • awesome_dude a day ago ago

            Speaking as someone that has spent a large amount of time unemployed because I have a moral compass - let me know when you actually walk that talk.

            For me I could only do it because I had "f*ck you" money gained through investments, other people are able to do it because of welfare systems, or even through friends and family.

            • lucianbr a day ago ago

              Textbook ad hominem. If the implication is that nobody sacrifices things for a principle or ever makes hard choices, that is so obviously wrong. Read some history.

              • awesome_dude a day ago ago

                I literally said that I personally had done so; so the only ad hominem is coming from you.

                I also asked if you had done it yourself, because, as I also said, from personal experience, it's a LOT easier said than done.

                Edit: fixed subject as I hadn't realised the person accusing me of making an ad hominem was the person I had originally replied to.

      • YetAnotherNick 2 days ago ago

        I don't know why the people here are naive enough to think that. Most programmers could donate more than 70% of their income to Africa if they wanted to make the world a better place, yet they only target people earning more than 3x what they do, even though the majority of the world earns less than 1/3rd of what they do.

        • make3 a day ago ago

          spending your productive output making one of the most powerful terrible people alive more powerful just sounds depressing to me. do you really want your research to be how to make the personality of Elon's anti woke mind virus LLM better at erotic talk.

          I guess it comes down to your daily efforts serving only to make the world a worse place, vs having a neutral job

          • YetAnotherNick a day ago ago

            Allowing it to users who ask for erotic talk is in my opinion way better than forcing someone to watch shady ads based on private conversations.

      • awesome_dude 2 days ago ago

        >> If you are wealthy

        Then.. you wouldn't be working...

        • yoyohello13 2 days ago ago

          Why is Elon still working then?

          • rsynnott 2 days ago ago

            I'm not sure that posting deranged tweets at three in the morning _really_ qualifies as work.

          • awesome_dude 2 days ago ago

            At the risk of drawing moderation ire..

            When does Elon work?

            • timacles 2 days ago ago

              He works pretty hard to destabilize democracy

    • kstrauser 2 days ago ago

      I’ve heard the haha-but-serious joke numerous times that you can’t have a security department that’s not trans and furry friendly. Thing is, I completely believe that. Those groups are disproportionately represented among the security community, and I personally would not work somewhere that my friends in those groups would feel unwelcome. That’s a quite common sentiment even among us straight cis non-furry men.

      Well, I don’t think it’s a stretch that the kind of highly educated data scientists and engineers who have the experience to work in high-end AI labs also don’t want to work somewhere that their friends and associates would feel unwelcome, let alone have their friends question why they’d be willing to.

      Turns out opinions have consequences and freedom of speech goes hand in hand with freedom of association. People have the right to say whatever they wish. Others have the right not to want to work with them.

      • Balinares 2 days ago ago

        This absolutely fits my observations and it's got to be one of my favorite secret things about the industry. More generally, the higher the skill level at a given org, the more trans furries you'll find, it seems like. There was a time you couldn't throw a stuffed fox across a Google SRE office without hitting one.

        I wonder if this holds well enough that you can use it as a proxy metric to assess the technical chops at a new company.

      • bobsmooth 2 days ago ago

        That's only because autism is common amongst those groups and you can't build anything worthwhile these days without a lot of autism.

        • kstrauser 2 days ago ago

          I don't believe that for a second. More likely, infosec tends to attract more results-oriented personalities. To generalize, "who cares what you look like as long as you're good?" As a consequence of that, infosec tends to be a lot more welcoming than other groups I've been around. As long as you act nicely, people generally don't care if you're a man, a woman, both, neither, or a gay horse. And it seems like there's been a feedback loop over many years: that acceptance drew more out-of-the-norm folks, which made it more accepting. Lather, rinse, repeat.

          But in any case, I thoroughly believe the "joke": turn people away because they don't look / act / think like most others, and soon the very best infosec talent will want nothing to do with you. And based on this article, I'm guessing that's true for other extremely technical fields, too.

          • sph 2 days ago ago

            This is the first time I hear someone equate furries and trans with “results-oriented personalities.” [1] Not saying they necessarily are not, but it’s finding correlation where there absolutely isn’t one just to disagree with actual evidence.

            Yeah, I’m gonna go with Occam’s razor on this one.

            1: where is the trans furries representation in senior management and other “results-oriented” fields?

            • kstrauser 2 days ago ago

              > This is the first time I hear someone equate furries and trans with “results-oriented personalities.”

              Technically, it's the zeroth time because I never even implied that. I said that the field itself is results-oriented. You usually can't get very far in the career without demonstrating competency at it. Where plenty of other fields had strong unspoken rules of "...as long as you fit in", this one's traditionally been comparatively open to talented people even when they don't look and act like everyone else.

            • 3371 2 days ago ago

              That's... not what was written there. Better read gp again slower.

    • lich_king 2 days ago ago

      Anthropic, maybe, but what is the philosophical niche of OpenAI? Their only consistent philosophical position about AI is "let's make more money".

      • tibbar 2 days ago ago

        I think OpenAI is more of an aesthetic. Very... Apple-like, polished, with an eye towards making really cool stuff. And aesthetics are a type of philosophy.

        This is less noble than how Anthropic presents themselves but still much more attractive to many than XAI.

        • tokioyoyo 2 days ago ago

          The feeling on the street is that Anthropic IS the Apple of the AIs.

          • tibbar 2 days ago ago

            Come now, surely Anthropic is a premium Linux distribution.

            • 2 days ago ago
              [deleted]
            • ysleepy 2 days ago ago

              And Apple a premium Unix derivative?

        • energy123 2 days ago ago

          To a researcher, the aesthetic is more like Bell Labs, with many research teams working with some autonomy, which is why the public naming of model releases appears chaotic. Very different to the top-down approach of Apple.

        • j_maffe 2 days ago ago

          > aesthetics are a type of philosophy.

          What philosophy is that?

      • small_model 2 days ago ago

        "You can use my model to kill others if Dario won't do it sir"

    • tyleo 2 days ago ago

      It’s interesting because for a long time people wanted to work for Elon because he held the moral high ground. “I’ll bring electric cars and space colonization online or die trying.”

      It’s sad to see the shift.

      • 2 days ago ago
        [deleted]
    • mattbillenstein 2 days ago ago

      This is becoming the problem with all of his businesses - Tesla has a crazy valuation and it really seems like they're having huge trouble getting Robotaxi going in Austin given the very slow progress there.

      • etchalon 2 days ago ago

        Very few people down here want to ride in them, and I have multiple friends with hilariously disastrous stories.

        Most of the Waymo stories are "Well, it took 15 minutes to arrive, but then it was fine, if a little slow."

        • boc 2 days ago ago

          Waymos in SF are nearly indistinguishable from ubers/lyfts at this point. Maybe a bit slower if you don't have the highway mode enabled on your account, but they are everywhere and arrive within 5min most of the time I order one. I've ridden them so often I've lost count.

          You'd have to pay me to ride in a Tesla robotaxi. That tech isn't anywhere near the same as Waymo.

    • mdgrech23 2 days ago ago

      I can't say I know the AI research community well, but I'd imagine OpenAI's alignment w/ the military would not align w/ the personal philosophy of many.

    • dan-robertson 2 days ago ago

      Why does being a top AI researcher so often come with this philosophical bent you describe?

      • ladberg 2 days ago ago

        You are paying the smartest people in the world to think really really hard, and turns out they might also think really really hard about not making the world a worse place

        • asddubs 2 days ago ago

          it's not working

        • bdangubic 2 days ago ago

          Is this really the case though? How many of the smartest people do you really think are out there that fit this narrative?! I want to believe there are at least some, but I think they are a minority in this group… otherwise I think all these pretty much evil corporations would have an awfully difficult time attracting talent? maybe some do but…

          • saagarjha 2 days ago ago

            Most evil corporations have fairly normal jobs available.

            • bdangubic 2 days ago ago

              if you want to make the world a better place as OP stated perhaps you can get a normal job in maybe less evil corp?

              • saagarjha 2 days ago ago

                Most companies are evil in some way, the question is how evil and how close you are to the evil. Most people will pick "not that evil but pays a lot". A few will take "pretty evil and pays more than a lot". Some will choose "less evil and pays poorly". (It's worth noting that there are a lot of jobs that are not at the Pareto frontier and are "more evil and pay worse" but social mobility etc. cause them to be selected anyway).

              • munificent 2 days ago ago

                When presented with a choice between:

                1. Take a job making $$$$$$$ at a company making the world worse.

                2. Take a job making $$$ at a company not making the world worse.

                Very few people have a personality such that they'll pick 2.

                • 2 days ago ago
                  [deleted]
                • bdangubic 2 days ago ago

                  exactly what I was asking OP, her/his comment sounded like people will pick the later (I agree with you)

        • watwut 2 days ago ago

          Except they do? They are certainly not making it a better place. Like, ok, it is money for a few companies and a salary, it is business and probably fun work.

          But it is absurd to claim it is "making the world better place".

          • metalcrow 2 days ago ago

            I'm not sure you can provide an objective means (i.e. a way to show that it is absurd) of explaining how an AI researcher is making the world a worse place. It's going to come down to disagreeing about some axiom like "is ASI rapidly approaching" or "is AGI good to have" and there's no right answer to those.

        • jasonfarnon 2 days ago ago

          not really. 15-20 years ago that same upper echelon of college/professional school graduates you're describing were going into finance.

      • mynameisash 2 days ago ago

        I would think it's because of the staggering money they're making. According to Fortune[0]:

        > Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”

        > Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”

        If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.

        [0] https://archive.ph/lBIyY

        • thereitgoes456 2 days ago ago

          I see you're treating Sam Altman as some kind of trustworthy source. Might it be possible that he's making that up -- of course, nobody will ever call him on it! -- and exaggerating the numbers to make his company and team look really good and ethical for not accepting such lucrative offers, or perhaps to make them sour on Meta for not receiving $100M offers?

      • tdb7893 2 days ago ago

        My experience with researchers (though not in AI) is that it's a bunch of very opinionated nerds who are mostly motivated by loving a subject. My experience is that most people who think really deeply and care about what they do also care more that their work is prosocial.

        • Sl1mb0 2 days ago ago

          > care more that their work is prosocial

          These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits that a technology provides is almost always secondary to their applications in warfare. Researchers can claim to care that their work is pro-social, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another.

          Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their ego.

          • compiler-guy 2 days ago ago

            The RAND Corporation did contribute some ideas theoretically connected to nuclear survivability (packet switching in particular), but all that work was pre-ARPANET and didn't really motivate the design in that way.

            It was designed to handle partial breaks and disconnections though. Wikipedia quotes Charles Herzfeld, ARPA Director at the time, as below, and has much more discussion as to why this belief is false. https://en.wikipedia.org/wiki/ARPANET

            ====

            The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.[113]

          • tdb7893 2 days ago ago

            So researchers are going to be irrational and also often value other things more highly than prosociality but that doesn't really refute my point that they value it more highly than the average population.

            Also your example of a bad technology is something that allows people to still communicate in the event of nuclear war and that seems good! Not all technology related to war is bad (like basic communication or medical technologies) and also a huge amount of technology isn't for war. We've all worked in tech here, "The development of technology is simply due to the reality of nations being in a constant arms race against one another" just isn't true. I've at the very least developed new technologies meant to make rich assholes into slightly richer assholes. Technology is complex and motivations for it are equally so and won't fit into some trite saying.

            • Sl1mb0 2 days ago ago

              I never claimed any techology is good or bad; you also seem to be in agreement with me that technology used in warfare _can_ have "good" applications (I mentioned that the benefits are secondary to their applications in war, that doesn't sound like me saying there are no benefits).

              Lastly, the only point I was trying to make is that the argument that researchers do these things for "pro-social" causes is kind of a facade; the macro environment that incentivizes technological development *is* mostly due to government investment. Sure, the individuals working on it may all have different motivations, but they wouldn't be able to do so without large sums of money. The CIA [1] literally has a venture capital firm dedicated to investing in the development of technology - do you really believe they are doing that to help people?

              - [1]: https://fortune.com/2025/07/29/in-q-tel-cia-venture-capital-...

      • cloverich 2 days ago ago

        This isnt unique to top AI researchers. Top talent has a long history of being averse to authoritarian/despotism at least in part because, by near definition, it must suppress truth. You cant build the future effectively with that approach.

      • wombatpm 2 days ago ago

        Because it is not Macrodata Refinement and you can’t stop them thinking off the clock.

      • janalsncm 2 days ago ago

        Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research.

        There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing.

        Note this doesn’t apply to everyone. Some people just want to make money.

      • refulgentis 2 days ago ago

        Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their profession - worldview might have made it clearer?

        • lo_zamoyski 2 days ago ago

          Indeed. Philosophically, I have not been impressed by the more vocal people associated with the field. They may not be representative - I think most do it for the money and it being hip.

          “Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics.

      • derektank 2 days ago ago

        Because a lot of them are academics that are doctors of philosophy

      • hermanzegerman 2 days ago ago

        Because they can afford it, they are very sought after.

        And smart people usually have moral convictions.

        I know for some people on this website it's hard to understand, but not everything in life is about $$$

        • 0x3f 2 days ago ago

          > And smart people usually have moral convictions.

          Are you sure you don't just like the moral convictions and so engage in trait bundling?

          Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.

          Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.

          • lo_zamoyski 2 days ago ago

            > Moral knowledge doesn't really exist.

            If that is the case, then why should you or anyone prefer to believe your claim that moral knowledge doesn’t exist over the contrary?

            • 0x3f 2 days ago ago

              Different kinds of claims, it's not self-referential

              • lo_zamoyski 2 days ago ago

                > Different kinds of claims

                How so?

                If I claim that one should prefer the claim "moral knowledge doesn't exist" over its contrary, then I am making a moral claim. That would make it self-refuting.

                There is no fact-value dichotomy.

                And one more thing...

                > the lack of falsifiability

                Is falsifiability falsifiable? If all credible claims must be falsifiable, then where does that leave us with the criterion of falsifiability (which is problematic even apart from this particular case, as anyone who has done any serious reading in the philosophy of science knows)?

        • lelanthran 2 days ago ago

          > And smart people usually have moral convictions.

          Dumb people have moral convictions. Smart people see the nuance.

        • siva7 2 days ago ago

          I'm smart and you can buy my morals. So what?

          • hermanzegerman 2 days ago ago

            Those people get paid so much anyway that they don't have to compromise their morals.

            I guess that's not the case for you and me

            • exe34 2 days ago ago

              so do oil and tobacco people, no?

          • refulgentis 2 days ago ago

            So what, indeed (not sure what you mean)

          • yoyohello13 2 days ago ago

            True, many smart people will gladly (or even begrudgingly) do evil for money. That's why there is so much suffering in the world, because of people like you.

            • 0x3f 2 days ago ago

              Is ad tech and the like really causing so much suffering? The government work, mass surveilance, killing people etc. doesn't actually pay that much, typically.

              • yoyohello13 2 days ago ago

                I think ad tech is probably the single most destructive technology of the new millennium. The shift toward "engagement at all costs" business strategies is basically the root cause of society's current political polarization. Engagement bait cultivates fear and rage in the populace to get clicks. We are now seeing the consequences of shoving ads that sow fear, anger, doubt and inadequacy into people's faces 24/7. This doesn't even touch on the fact that mass surveillance is only possible because of the technologies forged by the ad tech industry.

                • 0x3f 2 days ago ago

                  Well I'm not sure I entirely believe this myself, but it seems easy enough to argue that this is progress of a sort.

                  The West assumes pure democracy as the final form of government that we are all convergently evolving towards. But if this form of government or society is not robust to the kinds of things you're talking about, should it not suffer the consequences and be adapted or flushed for our long-term betterment?

                  It seems a bit like saying the French Revolution was the most destructive thing to happen in the history of France. Sure, in the short term. But it also paved the way for modern liberal democracy.

                  • yoyohello13 2 days ago ago

                    That’s fair enough. I wouldn’t say I’m happy about needing to live through interesting times, but if we make it out the other end maybe something better will come of it.

    • zeroCalories 2 days ago ago

      It's worse than that. Elon is a notoriously bad employer, and the only people that put up with him were the people that shared his vision. Pretty much the only people that will work for him now are second rate researchers and people that think gooner AI and racism is a worthwhile mission.

      • vessenes 2 days ago ago

        There's some texture here. Elon's enriched pretty much everybody who's ever worked for and invested with him. He makes money for people throughout his orgs. Many ex-employees have said to me: "incredible opportunity, made great money, worked insanely hard, once is plenty".

        • NeutralCrane 2 days ago ago

          My ex-Twitter employee coworkers beg to differ. They made plenty of money before Elon came around. Once he was in the company, one of them actually hired a personal attorney to confirm that he wasn’t going to be burned by the things Musk was asking him to do, before he finally decided it wasn’t worth it to work there anymore and left.

          • tptacek 2 days ago ago

            I think Musk is odious but I think there's a lot of complicating evidence to the story of what happened at Twitter. And: very smart people, like Dan Luu, were complaining about their culture long before Musk arrived.

            • keeda 2 days ago ago

              Is there anything from Dan Luu you could point me at offhand about Twitter's culture? The only thing I recall was a blog post about technical issues, but that didn't seem to have much bearing on the culture.

              My understanding is Twitter always had cultural issues, but it was not very different from other tech companies of the time, and what most of us would consider "directionally correct." I have it on pretty good authority from a very senior engineer who left before Elon took over (so no grudges other than, you know, "because Elon") that a lot of the things he said publicly about Twitter's technology were highly misleading or downright false. Like, IIRC, something about them not having CI/CD. A total lie.

              • tptacek 2 days ago ago

                I have no idea what Musk did or didn't say. I don't pay attention to him; I think he's odious. But he did cut more than half the entire workforce and the service works as well as it ever has, which is pretty damning. I'm not willing to tie myself into the pretzel required to explain how antebellum Twitter was well-managed given that.

                There's some fraction of that workforce that supported projects intended to make Twitter a viable standalone business, which it probably no longer is. Backoffice / line of business projects intended to support advertisers, that sort of thing. But I don't think you can explain a RIF of Twitter's scale that way.

                (I'll try to dig up the Luu post I'm thinking of.)

                • tasty_freeze 2 days ago ago

                  Many of the workforce he laid off were content moderators -- I've read it was a serious effort with a large number of people doing thankless work. There is now way more anti-Semitic content on X, more racial insults, etc.

                  • mancerayder 2 days ago ago

                    Side point, but you'll find plenty of antisemitism on HN in the Israel articles that have many comments. It comes in the form of conspiracy comments that people reply with, which use Mossad, pedophilia, Netanyahu and the US in the same sentence. Any replies calling it out become greyed from downvotes.

                    It's just not viewed as anti-Semitism, probably in the same way that the posts on X aren't viewed as far-right or extremist.

                    Extremists usually don't experience their views as extreme, but as rational and important.

                  • tptacek 2 days ago ago

                    Come on. No they weren't.

                    • keeda 2 days ago ago

                      Well, not just content moderators: he gutted Trust and Safety and the content moderation function of the company, which is surprisingly much larger than the moderators themselves. Having worked peripherally with similar departments that had multiple teams, even though a lot of it comes down to human moderators, there is a ton of technology around the moderators, and even more around getting the content to them in the first place.

                      Firstly, this is a red queen’s race because like security, new types of unwanted content, threats and risks keep arising as the information (and misinformation) landscape and overall zeitgeist keeps shifting. The work is never done and the best that can be done is to build platforms and frameworks to streamline it. There is also a lot of fractal complexity everywhere.

                      E.g. there’s a ton of technology needed to support the moderators themselves. Infrastructure like review queues to enable them to rapidly handle content classified by type, risk level and priority. Like Jira but not Jira because it can’t scale to the number of queues and issues involved here. So you basically re-implement and maintain a Greenspun’s 10th rule version of Jira.
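                      The prioritization core of such a queue is conceptually simple, even if the surrounding infrastructure isn't. A toy sketch of risk-ordered dequeueing (all item names and risk scores here are invented, and this resembles nothing of the real system):

```python
import heapq
import itertools

# Review items are pulled highest-risk-first; ties break by arrival
# order. heapq is a min-heap, so risk is negated on the way in.
_arrival = itertools.count()
queue = []

def enqueue(item, risk):
    heapq.heappush(queue, (-risk, next(_arrival), item))

def next_item():
    return heapq.heappop(queue)[2]

enqueue("spam report", risk=1)
enqueue("graphic violence report", risk=10)
enqueue("harassment report", risk=5)

print(next_item())  # the highest-risk item surfaces first
```

                      The hard part is everything around this loop: sharding it across thousands of moderators and content-type queues, which is why it grows into the Greenspun's-10th-rule Jira described above.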

                      There is still a huge amount of invisible complexity beyond that. For instance, you need to manage how much of a certain type of content gets exposed to a given moderator because some types (CSAM, gore) lead to burnout and PTSD. You also need to blur these things.

                      (Also the same type of content often gets reshared, so you need things like reverse image search to auto-filter that, because running the whole pipeline each time is expensive.)
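                      The reverse-image-search piece usually comes down to perceptual hashing: near-identical re-uploads hash to nearly the same bits. A minimal pure-Python sketch of an average hash (real systems use PhotoDNA/pHash-style algorithms on much larger grids; the pixel data below is invented):

```python
def average_hash(pixels):
    """One bit per pixel: set if the pixel is brighter than the
    image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two "images" as 4x4 grayscale grids; the second is a slightly
# re-encoded copy of the first (small per-pixel noise).
img = [[10, 200, 30, 220], [15, 210, 25, 215],
       [12, 205, 28, 225], [18, 195, 35, 230]]
copy = [[12, 198, 33, 218], [14, 212, 27, 213],
        [11, 207, 30, 223], [20, 193, 36, 228]]

d = hamming(average_hash(img), average_hash(copy))
print(d)  # a small distance flags the re-upload as a near-duplicate
```

                      Because a re-encoded or lightly edited copy lands within a small Hamming distance of the original, it can be auto-actioned without re-running the full classification pipeline.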

                      This of course necessitates a ton of machine learning. Because risks keep shifting, and (pre-LLMs) each type requires the entire ML lifecycle and related infra: collecting and cleaning data, building classifiers for them, deploying them, seeing how well they work, and tuning them, and then replacing them when the bad actors eventually adapt to newer means.

                      ML is also of course needed for bots, spam and scams, which keep evolving. Entirely different techniques here though.

                      Then there is all the infra needed to handle the fallout of moderation. Counting strikes against users, dealing with their complaints, handling escalations, each case with a long history of interactions that needs to be collated for quick evaluation. Easier said than done because of course the backend is not an RDBMS but a bunch of MongoDB-alikes because webscale.

                      And all of this is a signal for the ranking used for feed, the main product, which keeps evolving, so a ton of “fire and motion” happening there. You introduce a new feature in the feed? You just introduced a dozen different abuse vectors.

                      Then there are policy makers and the technology needed to support them. Policy is always shifting as the landscape is shifting. This also includes dealing with regulations, which are also often shifting and require ways to deal with legal requirements and reporting bodies like NCMEC. And this varies by jurisdiction. Like not just by countries, sometimes even by states.

                      (Funny story about NCMEC – it has an API to report CSAM, but I could not find it. So I googled something like “child porn API” and got a blank results page. Pretty sure I’m now on a list somewhere.)

                      I could go on and on. And I wasn’t even working in this area, just supporting these teams! Admittedly in our case I'd put the relevant headcount in the hundreds and not thousands, but our scale was also very different. For a company that is ENTIRELY about user-generated content at massive scale, up to national-level events like Arab Spring -- even if there was a lot of bloat -- I would not be surprised to learn this function was the majority of the workforce.

                      And Elon killed pretty much all of this. And, well, we see the results everyday.

                      • tptacek 2 days ago ago

                        I get that he shredded trust & safety, and that Twitter got way worse afterwards in that regard. But he fired more than half the workforce, and they were not mostly T&S people.

                        • keeda a day ago ago

                          I dunno, most reports from the time (and a quick Google AI overview just now) mentioned the cuts largely focused on T&S and moderation teams. Even the ML teams he cut reportedly were working more on safety and integrity issues. Many who worked on "woke" issues were also cut, but the line between T&S and "woke" gets blurry quickly.

                          To be fair, this could be due to the bias in reporting, as media outlets may have had incentives for over-emphasizing the T&S angle.

                          I do not deny there was bloat. There was bloat in most tech firms at the time. But I don't think it was 80% bloat. My post was to explain how, even if T&S / moderation seems like a small function, it can require an unexpectedly large headcount -- probably even more for a pure-UGC company like Twitter -- and so could realistically account for the bulk of the cuts.

                          • tptacek a day ago ago

                            Come on. Zillions of developers have complained about getting RIF'd. It's not a mystery. I don't like Musk's Twitter. I don't like Musk. But pretending isn't getting us anywhere.

                            • keeda a day ago ago

                              I'm not sure I follow. Assuming you mean the zillions of developers that got RIF'd at Twitter, do we know how many were bloat versus working on the T&S and related functions? I tend to believe the latter based on media reports and because that has clearly had an impact on the product.

                              • tptacek a day ago ago

                                It's OK if our premises are too far apart to hash this out. No, I don't think shredding T&S is one of the principal components of the giant Twitter RIF. Yes, T&S got killed; yes, that's bad. No, you can't explain how Musk manages to keep Twitter technically functioning as well as it does by pointing to T&S.

                                • keeda 8 hours ago ago

                                  Totally fair. That said, I'll leave a mention of a plausible theory I have for how Elon -- and the rest of the industry -- have been managing to keep things running with all these layoffs:

                                  https://news.ycombinator.com/item?id=45192092

                                  Again, I'll admit Twitter and all other companies had bloat. But based on these industry-wide reports about record levels of burnout, inside knowledge of at least one company that I thought had unjustified layoffs, and a large number of conversations I've been having with connections across the tech industry, I think these layoffs have long gone far beyond the bloat.

        • KaiserPro 2 days ago ago

          I don't really think that's true.

          The deal with Tesla is that there is a relatively small pool of employers, so you can be a fairly bad employer but still get good outcomes. The same with SpaceX. Sure, early Tesla had some stories about it being fun, but there was/is a dark side.

          The issue with xAI is that researchers have a whole bunch of other employers to choose from. Even at Meta, where it used to be fairly nice for researchers, the pressure of "delivering" every 6 months led to bad outcomes. Having someone single you out because the boss had a bad day is not how good research gets done.

          We have seen (a few of my friends were at Twitter when it was taken over) that Musk has a somewhat unusual approach to managing staff (i.e. camping at work). Some researchers love that, assuming that they have peace to research and are listened to. But a lot don't.

          • vessenes 2 days ago ago

            I think we are saying the same thing. He builds trillion dollar companies that are labor efficient; nobody said they are good places to work.

        • Freedom2 2 days ago ago

          Many ex-employees have said to me that working for Elon did not enrich them at all, either financially or professionally.

        • hermanzegerman 2 days ago ago

          He's a notorious cheapskate and Tesla is known for firing people shortly before their stock options vest

        • rconti 2 days ago ago

          What about all the ones who are suing him for shortchanging them?

        • jamespo 2 days ago ago

          There's probably a lot of survivor bias going on there

          • vessenes 2 days ago ago

            Undoubtedly. With 2.5T in value between tsla and sx that’s a lot of value for survivors.

            • sumeno 2 days ago ago

              What % of that is owned by employees that aren't named Elon Musk?

        • raw_anon_1111 2 days ago ago

          Ask the people at Twitter..

          • cladopa 2 days ago ago

            You mean the 80% of the workforce that was fired and the company continued running just fine?

            Usually, firing even 3 to 5% of a company's workers has terrible consequences for the company that does it.

            It does not speak so well about the workers.

            • mattbillenstein 2 days ago ago

              He also cut 80% of the traffic... And the fact that it kept running with him willy nilly pulling network cables is a credit to the work they did to make it resilient to failure.

              • iknowstuff 2 days ago ago

                Source on pre/post traffic numbers?

            • keeda 2 days ago ago

              I don't understand this take. Do people think engineers go in to work to turn handcranks to keep the machines running? It's actually a credit to the automation built by the engineers he fired that it kept running!

              At the time I joked that like Chaos Monkey, we should have an "Elon Monkey" to "fire" arbitrary people by sending them on mandatory vacations with no connectivity to see what falls over.

              • spullara 2 days ago ago

                The people that built the infrastructure that runs Twitter left before he showed up. Most of it was written by a half dozen people that left around 2016.

            • watwut 2 days ago ago

              It got significantly worse: it could not keep advertisers and became overrun by bots. The quality went down significantly, and earnings too.

          • JumpCrisscross 2 days ago ago

            > Ask the people at Twitter

            The ones with stock options in, now, SpaceX?

            • sroussey 2 days ago ago

              Poor SpaceX employees whose options got diluted by Twitter. :/

            • raw_anon_1111 2 days ago ago

              Stock options aren’t magic. I bet you that the remaining Twitter employees won’t see a higher comp than equivalent employees at BigTech companies between their cash + RSUs when SpaceX IPOs.

              Aren’t employees also subject to a lock-up period where they still can’t sell their stock until $x number of months after an IPO, unlike employees of public companies who can sell as soon as they vest?

              Honest question, I’ve worked for public $BigTech but haven’t been at a company pre IPO

              • htrp 2 days ago ago

                180 day lockup period is standard

            • rconti 2 days ago ago

              No, the ones suing his ass.

        • Zigurd 2 days ago ago

          > Elon's enriched pretty much everybody who's ever worked for and invested with him.

          I'd wager you were saying the same thing about bitcoin until last year.

          • mediaman 2 days ago ago

            I'm unclear what statement this is trying to make.

            Is it meant to draw equivalence between crypto and Tesla/SpaceX? That each has roughly similar (i.e., low) value to humanity, or value as businesses?

            Is it that the metric of whether a person makes others money is invalid?

            The comment seems coy, possibly to avoid making any claim at all, but it must not be that because that wouldn't be very sporting.

            • iamacyborg 2 days ago ago

              He’s saying that it’s easy to say good things when the market’s on an upswing.

              • Zigurd 2 days ago ago

                I'm also saying that almost all of TSLA's price is roughly the same as all of bitcoin's price, which is to say vibes-based. It's a fandom. A cult.

      • LZ_Khan 2 days ago ago

        After seeing the type of people he hired for doge.. yikes.

        • hooch 2 days ago ago

          Was doge ever anything more than a "get root, grab the data, and run" operation?

          • boc 2 days ago ago

            Maybe, but destroying USAID was an unforgivable sin. Short of nukes, rapidly turning off direct medical and food aid that people in critical need have relied on for years is objectively one of the fastest ways to kill millions of people.

          • pstuart 2 days ago ago

            Don't forget the destruction of USAID and countless projects that had the word "diversity" in their work.

          • joquarky 2 days ago ago

            It's pretty obvious now.

            • yoyohello13 2 days ago ago

              It was obvious at the time too.

          • markdown 2 days ago ago

            I think more important than that was shutting down all investigations into Musk's companies.

        • GeorgeTirebiter 2 days ago ago

          Karpathy worked for Elon for, what, 5 years? How did he do it, if Elon is Ivan the Terrible?

          • cmorgan31 2 days ago ago

            Mate, wouldn’t it make sense that these rules are applied via hierarchy? If Elon respects Karpathy, he almost certainly gave him a longer leash, and Karpathy's output was strong enough to not warrant intervention. It's clear he did not want to stay long term, so I'm not sure this is a strong line of thinking.

            • GeorgeTirebiter 2 days ago ago

              It's possible. I don't know. My tone comes off as supporting Elon, and I do not, at all. I've seen first-hand almost all of these tactics while I was at <Elon Company>. I'm observing that some people seem to do OK at Elon's companies, and for many years, and never seem to get the boot or be abused in other ways. Therefore, Elon is probably not quite as bad a manager as he is made out to be. This is all I am saying. Since I have firsthand knowledge, I believe my opinion has value. Those that disagree? Show me your Source of Truth. Thank you.

              • cmorgan31 2 days ago ago

                I don’t believe Elon is even remotely like a people manager. He's a stakeholder and operator, which require different skill sets. He finds folks who will manage, and who bring the empathy he tends to lose in his pursuit of his next project. I believe your evidence may be anecdotally valuable, but let's be clear about the dynamic of a founder/CEO.

          • jazzpush2 2 days ago ago

            Karpathy makes great educational content. It's not clear what industry (or academic) research he did even now, five years later.

      • ai_critic 2 days ago ago

        Gooning and racism have been a cornerstone of humanity since we descended from the trees, for better or worse.

      • vibeprofessor 2 days ago ago

        [dead]

    • tim-tday 2 days ago ago

      Even people who are purely money motivated have an instinct for self-preservation.

    • GardenLetter27 2 days ago ago

      The main issue is the requirement for relocation tbh.

    • PunchTornado 2 days ago ago

      Sorry, but what is the philosophical niche of OpenAI, really? Obtain money at all costs? No red lines when using your models in war? Work for Scam Altman?

    • general_reveal 2 days ago ago

      What do you mean “philosophical”? Ethics and morals are not required, Elon can get whatever type of asshole he needs. Something else is up.

    • oceanplexian 2 days ago ago

      > But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work

      The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.

      • TheEzEzz 2 days ago ago

        > The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them.

        I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.

      • squidbeak 2 days ago ago

        > The "top researchers" in AI are Chinese. And I am skeptical that they have even remotely the philosophical or political alignment you are attempting to project on to them.

        What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.

        • oceanplexian 2 days ago ago

          Idealism of what? That the government shouldn't use AI for surveillance or the military?

          You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.

          • dminik 2 days ago ago

            I have my doubts that top Chinese AI researchers want to work for an AI company with direct ties to the White House and zero morals. Not for any great ethical concerns, mind you; simply because the US is a geopolitical rival to China.

  • bearjaws 2 days ago ago

    Feel like the canary was when Grokpedia became a project.

    Giant waste of time while Anthropic/OAI keep surging forward.

    I also keep hearing this narrative that Twitter is a good data source, but I cannot imagine it's a valuable dataset. Sure keeping up with realtime topics can be useful, but I am not sure how much of a product that is.

    • paulbjensen 2 days ago ago

      The Twitter social graph was an amazing data asset. I worked at a consumer insights firm and the data on followers/followings was quite powerful.

      Using a custom taxonomy of things (celebrities, influencers, magazines, brands, tv shows, films, games, all kinds of things), we could identify groups of people who liked certain things, and when you looked at what those things were, it gave you a way of understanding who those people were.

      With that data, you could work out:

      - What celebrities/influencers to use in marketing campaigns

      - Where to advertise, and on which TV/radio channels

      - What potential brands to collaborate with to expand your customer base

      - What tone of voice to use in your advertising

      - In some cases, we educated clients about who their actual customers were, better than they understood themselves.
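      For the curious, the core of that kind of analysis is just an over-index score: how much more often the brand's followers follow some entity in the taxonomy than a baseline sample does. A toy sketch with invented accounts and taxonomy entries (the real pipeline obviously ran over a vastly larger graph):

```python
from collections import Counter

# Hypothetical follower data: account -> set of taxonomy entities followed.
followers_of_brand = {
    "u1": {"beach_volleyball_star", "lads_mag", "football_club"},
    "u2": {"lads_mag", "football_club"},
    "u3": {"beach_volleyball_star", "lads_mag"},
}
baseline = {  # a random sample of the wider user base
    "v1": {"football_club"},
    "v2": {"cooking_show"},
    "v3": {"lads_mag", "cooking_show"},
    "v4": {"football_club", "news_channel"},
}

def affinity(audience, base):
    """Over-index score: share of the audience following an entity,
    divided by the (smoothed) share of the baseline following it."""
    a = Counter(e for fs in audience.values() for e in fs)
    b = Counter(e for fs in base.values() for e in fs)
    n_a, n_b = len(audience), len(base)
    return {e: (a[e] / n_a) / ((b[e] + 1) / (n_b + 1))  # +1 smoothing
            for e in a}

scores = affinity(followers_of_brand, baseline)
for entity, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(entity, round(s, 2))
```

      Entities with a score well above 1 are what the audience over-indexes on, and those are what drive the kinds of recommendations listed above.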

      One scenario, we built a social media feed based on the things that a group of customers following a well-known Deodorant brand in the UK would see.

      When we presented that to the client, they said “Why are there so many women in bikinis in this feed?”

      The brand had repositioned themselves to a male-grooming focussed target market, but had failed to realise that their existing customer base were the ones that had been looking at their TV adverts of women on beaches chasing a man who happened to spray their Deodorant on them. Their advertising from the past had been very effective.

      That was the power of Twitter’s data, and it is an absolute shame that Twitter went the way that it did. Mark Zuckerberg once said that Twitter was like “watching a clown car driven into a gold mine”.

      I’m pretty sure he must be delighted with how things have panned out since.

      • BLKNSLVR 2 days ago ago

        That entire description sounds worthless to any positive direction of humanity. Therefore probably rapaciously profitable

        Very sad face.

      • rchaud 2 days ago ago

        In other words, using flash-in-the-pan data to build an advertising goldmine.

      • mbs159 2 days ago ago

        Damn, this only validates the use of ad-blockers / sponsor-blockers even more

      • smcin 2 days ago ago

        That Zuckerberg quote was published in 2013 and supposedly was made a year or more before. Was it about when Dick Costolo was CEO (2010-2012)?

      • johnisgood 2 days ago ago

        This reads very dystopian. You are not optimizing to understand people, you are optimizing to weaponize that understanding against them.

        When you know what someone will buy based on exploiting their unconscious preferences, and you are paid to increase sales, you will do it. Especially if your competitors are doing it too.

        And this happens at scale, invisibly. People never see the manipulation.

        In any case, it is not useful for most people. It is useful for the people doing the deceiving.

        • etchalon 2 days ago ago

          It's marketing. That's how marketing works.

          • rhubarbtree 2 days ago ago

            And it’s far more important in capitalism than your products.

            With the advent of AI, startups become solely about marketing, sales, and defensibility.

            So most of the capitalist system will become of this nature. Doesn’t seem like such a good system, and inevitably unsustainable.

        • caaqil 2 days ago ago

          The tech is interesting and useful, no need for the scary moral framing.

          The original application of the entire field of data science or ML is/was actually based on this paradigm of finding "unconscious preferences" (your words) and hidden patterns. How one chooses to deploy the tech should be judged on its own.

          On the current trajectory of tool/data abuse where Palantir et al. are leading the way, this is very low on the sinister scale.

          • johnisgood 2 days ago ago

            I am not disputing that the tech is interesting. My point is about how it is being applied. The examples above are not about understanding people, they are about exploiting their latent preferences (before: "unconscious preference") for persuasion at scale.

            Attempting to normalize that by saying "Palantir is worse" does not make it any less manipulative and sinister.

            And to be more on topic, Twitter's value as dataset is overstated. Hardly the panacea people make it out to be.

          • hananova 2 days ago ago

            To not frame the amorality and negative effects centrally and primarily is to be dishonest. Apart from those whose wages depend on not seeing it, there isn't a single person who doesn't see that that entire branch of tech has strictly negative value to society.

            But of course, line must go up, and it's not you personally being negatively affected, so it doesn't matter.

      • gwern 2 days ago ago

        It's definitely very valuable, but for what AI model? How does any of that lead to AGI, or even just a good coding agent?

        • applfanboysbgon 2 days ago ago

          It doesn't need to lead to AGI or a good coding agent. Some of the only people who are actually profitable in the LLM industry are the people making actual chatbots. There are several bootstrapped startups that run open-weight models with a $10 or $20 monthly sub and make millions in profit off of inference from people just talking to the things, usually for character roleplay / "AI boyfriend/girlfriend" stuff etc. Some of them even took those profits and invested it into training their own bespoke models from scratch, usually on the smaller side although finetunes/retrains of Llama 70b, GLM, and Deepseek 670b have also been done. Grok could probably be profitable if it targeted this space, as the most "intelligent" conversational/uncensored model.

          This is already presupposing that profit even matters, though. Musk already burned some $50 billion to control messaging on political discourse with his acquisition of Twitter. It was not about money, but power. After you already have infinite money, the only thing left to spend it on is acquiring more power, which is achieved through influencing politics. LLMs represent a potentially even better propaganda tool than social media platforms. They give you unprecedented access to people's thoughts that they would probably not share online otherwise, and they allow you to more subtly influence people with deeply-personalised narratives.

        • KaiserPro 2 days ago ago

          > but for what AI model?

          Sentiment analysis. Working out what words lead to what outcomes, and then being able to predict on new data, is super useful.

          For coding or "AGI", no, it's not useful. For building a text-based (possibly image-based) categorisation system, top class.
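          As a concrete illustration of the kind of model that data supports, here's a minimal naive-Bayes word-level sentiment classifier on invented toy data (anything trained on real Twitter text would be far larger, and nowadays transformer-based):

```python
from collections import Counter
import math

# Tiny hypothetical training set: (text, label) pairs.
train = [
    ("love this product great value", "pos"),
    ("amazing quality would buy again", "pos"),
    ("terrible waste of money", "neg"),
    ("awful broke after one day", "neg"),
]

def fit(data):
    """Per-label word counts."""
    counts = {lbl: Counter() for _, lbl in data}
    for text, lbl in data:
        counts[lbl].update(text.split())
    return counts

def predict(counts, text):
    """Pick the label with the highest log-likelihood for the text,
    with Laplace smoothing for unseen words."""
    vocab = {w for c in counts.values() for w in c}
    best, best_lp = None, -math.inf
    for lbl, c in counts.items():
        total = sum(c.values())
        lp = sum(math.log((c[w] + 1) / (total + len(vocab)))
                 for w in text.split())
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

model = fit(train)
print(predict(model, "great product love it"))  # prints "pos"
```

          With billions of real posts as training data, even simple word-count models like this get surprisingly good, which is where the dataset's value lies.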

      • Gud 2 days ago ago

        Ok, in that case I am glad that Elon fucked it up.

      • alex1138 2 days ago ago

        As an aside, that quote from MZ does bother me. There's more to building a web-scale, human-rights-respecting platform (and it has to respect them; it's the internet, and social media needs guidelines) than just making money (which Zuck doesn't seem to care much about anyway if he's sinking apparently billions into the metaverse while having no account support).

        Of course he would only see it through the lens of cash. I have no idea how profitable Twitter was under Dorsey, but it felt like the spirit of the company at first was relatively neutral; it was a tool, it was what Jack came up with.

        Zuck replaced people's email addresses[1], and the feed has been wildly unchronological for years. Fix some of those problems wrt. lack of user respect and maybe you can make statements like "all else being equal, clown car gold mine". Or was it "dumb fucks"[2]?

        [1] https://news.ycombinator.com/item?id=4151433 [2] https://news.ycombinator.com/item?id=1692122

      • 2 days ago ago
        [deleted]
      • cyanydeez 2 days ago ago

        It _was_ a great asset. However, just like models need proper data: as soon as Musk removed the clamps on valuable social signals, well, he basically took a dump where he intended to eat.

        • ohyoutravel 2 days ago ago

          They did say was, and did say Twitter, which existed in the past.

    • brokencode 2 days ago ago

      It’s pretty telling that Elon had to have Grok rewrite Wikipedia because the truth was too woke for him. No idea how anybody can ever take Grok seriously.

      • freehorse 2 days ago ago

        Many projects in his companies seem to be more and more Musk's vanity projects than ideas/products one can take seriously. This is also how Tesla ended up with a huge Cybertruck inventory that nobody wants to buy and that thus had to be bought by his other companies. And it is getting worse and worse, especially ever since he bought Twitter and sped up his tweeting rate.

        • dmarcos 2 days ago ago

          FWIW it looks like there's now a demand surge with the introduction of the new cheap Cybertruck variant. Delivery dates are pushed out to the fall of 2026.

          • robrain 2 days ago ago

            That was an artificial boost created by setting a time-limit for a low price. There were ten days to buy at the price, then they put it back up. [1]

            [1] https://electrek.co/2026/03/01/tesla-cybertruck-awd-price-in...

            EDIT: grammar

            • parineum 2 days ago ago

              What's an artificial boost? Sounds like you're describing a sale.

              • hananova 2 days ago ago

                Sales are artificial boosts yes. The difference is in the connotation. A sale is given for something that people generally would buy anyway, but now more people will. An artificial boost is given to stuff nobody wants, but at a lower price can be convinced to buy.

                Or in other words, sales raise $high_number to $higher_number while artificial boosts raise $essentially_zero to $acceptable_number.

                • dmarcos 2 days ago ago

                  Your claim is that people that bought the cybertruck at a lower price don’t actually want it?

                  • sigmarule 2 days ago ago

                    I believe the claim is that the demand side did not change, the supply side did, as in sales != demand.

                    • dmarcos 2 days ago ago

                      Just quoting the above

                      “An artificial boost is given to stuff nobody wants, but at a lower price can be convinced to buy”

                      So people spent 60k on a cybertruck that they didn’t want? Is that the claim?

                      • pas 2 days ago ago

                        the claim is that it moved sales forward in time, but it'll have a corresponding dip in sales later, whereas a good sales campaign increases total volume (virtually no dip, brings in new customers, etc)

                • parineum 2 days ago ago

                  > artificial boost is given to stuff nobody wants, but at a lower price can be convinced to buy.

                  People do want it, clearly, but it's too expensive for them.

                  Sales don't make people want things they otherwise don't.

                  • bdangubic 2 days ago ago

                    > Sales don't make people want things they otherwise don't.

                    That is exactly what sales do. most sales are made by selling people things they don’t want, until sales does what sales does

                    • dmarcos 2 days ago ago

                      So people spent 60k on a cybertruck they don’t want? Do you believe that?

                      • bdangubic 2 days ago ago

                        look around your house and see how much shit you got that you really want(ed). a great salesman (and elon is the best in the history of civilization) will sell you shit you never thought you wanted :)

                        • dmarcos 2 days ago ago

                          The motivation to buy something is always that you want it. That a product doesn’t meet your needs or expectations later is a different story. What’s your evidence for claiming that people spending 60k on a Cybertruck don’t want it? What’s your evidence for making a similar claim, or the opposite, about any other purchase? Without evidence, it feels like you are making baseless claims about people's motivations.

                          • bdangubic 2 days ago ago

                            > The motivation to buy something is always because you want it

                            salesmen make you want stuff you didn’t know you wanted, but now you do. the entire world economy is built on this

                            • dmarcos a day ago ago

                              Is it still your claim that people spending 60k on a Cybertruck don’t want it? How do you know? Given the lack of evidence, it feels like motivated thinking. You don’t like Elon and can’t accept that tons of people actually like him and his products.

                  • sillyfluke 2 days ago ago

                    literally almost everything I have bought on sale is something I wasn't looking to buy at that moment in time.

                    • parineum a day ago ago

                      How many of those things cost more than 10,000 dollars?

                • RobRivera 2 days ago ago

                  [X] doubt

          • NewJazz 2 days ago ago

            Look up what their production targets were and compare that to their sales. A small temporary demand surge isn't going to be enough to chew through their current inventory, let alone keep the production lines busy.

          • MPSimmons 2 days ago ago

            Pushed-out delivery dates are as likely to mean production issues as an influx of interest.

        • annexrichmond 2 days ago ago

          Drivel. They’re selling just as well as Rivians.

          • izacus 2 days ago ago

            They're not even selling as well as Volkswagens here anymore.

      • squarefoot 2 days ago ago

        Probably the next generations of kids being fed PragerU study material will. Something tells me we haven't seen a fraction of what's going to happen in the decades to come.

      • Timon3 2 days ago ago

        I take Grokipedia very seriously as a threat to society. Sure, they're happy if people read it and fall for it - but the primary goal is not to convince humans, it's to influence the search results of current models and to poison the training data of future models. ChatGPT (and most likely other models/providers too) is already using Grokipedia as a source, so unless you're aware of the possibility and always careful, you might be served Musk's newest culture war ideas without being any the wiser.

        It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.

        • danabramov 2 days ago ago

          I've seen Claude pick it up too. It's disconcerting.

      • alex1138 2 days ago ago

        I can both dislike Elon and also think Wikipedia is very captured on some things

        • ryandrake 2 days ago ago

          Are there actual good examples showing errors of fact on Wikipedia that are verifiably incorrect, that demonstrate how it is "captured"?

          • calqacon 2 days ago ago

            How about Grabowski and Klein, "Wikipedia’s Intentional Distortion of the History of the Holocaust", about the outsize influence of certain coordinated Polish editors on the Wikipedia articles about Poland and the Holocaust?

            https://www.tandfonline.com/doi/epdf/10.1080/25785648.2023.2...

            Quote from the conclusion:

            > This essay has shown that in the last decade, a handful of editors have been steering Wikipedia’s narrative on Holocaust history away from sound, evidence-driven research, toward a skewed version of events touted by right-wing Polish groups. Wikipedia’s articles on Jewish topics, especially on Polish–Jewish history before, during, and after World War II, contain and bolster harmful stereotypes and fallacies. Our study provides numerous examples, but many more exist. We have shown how the distortionist editors add false content and use unreliable sources or misrepresent legitimate ones.

            For a more recent paper, "Disinformation as a tool for digital political activism: Croatian Wikipedia and the case for critical information literacy" by Car et al. says that:

            > The Hr.WP [Croatian Wikipedia] case exemplifies disinformation not only as content manipulation, but also as process manipulation weaponising neutrality and verifiability policies to suppress dissent and enforce a single ideological position.

            https://doi.org/10.1108/JD-01-2025-0020

            • ethbr1 2 days ago ago

              If the debate here is that sustained ethno-political campaigns are slightly shifting Wikipedia over time in a way that requires an academic paper to detect...

              Vs.

              What Elon is doing...

              Then we're not even comparing fruits to fruits.

          • servo_sausage 2 days ago ago

            I find it more surprising that the common understanding has shifted away from "wikis are crap for anything new or political".

            As soon as there is a plausible agenda for selecting a narrative, given the way Wikipedia works, we should be sceptical.

            For recent examples, everything to do with Biden and family, and Gamergate. These pages are still full of discussion, and what's written is more ideological than factual. You can follow these pages to see how an in-group selects a narrative.

            And these topics are not nearly as controversial as race, feminism, or transgender topics.

            • ryandrake 2 days ago ago

              OK, is there a specific example on either the Biden or Gamergate page that is factually incorrect? Or are you saying the entire pages are false?

              • servo_sausage 2 days ago ago

                My point is more that the history of those pages is a good example of how Wikipedia works for controversial topics. It's not really a process of becoming more correct as better sources are found and argued about, like it is on more neutral pages; instead it's an in-group deciding what to represent, collecting their preferred opinion pieces. And this changes over time, getting no closer to neutrality within the same article's history.

                You can write an equivalent article starting with "Gamergate was a movement reacting to the improper collusion between game developers and journalists" and find just as many sources, but the current article wants to promote the idea that it was a harassment campaign first.

                • datsci_est_2015 2 days ago ago

                  It was also pretty credibly a psyop orchestrated by Steve Bannon and Jeffrey Epstein, but that’s probably better served in history books and biographies rather than an encyclopedia.

              • scarmig 2 days ago ago

                Wiki's Gamergate opening paragraph:

                > Gamergate or GamerGate (GG) was a loosely organized misogynistic online harassment campaign motivated by a right-wing backlash against feminism, diversity, and progressivism in video game culture. It was conducted using the hashtag "#Gamergate" primarily in 2014 and 2015. Gamergate targeted women in the video game industry, most notably feminist media critic Anita Sarkeesian and video game developers Zoë Quinn and Brianna Wu.

                Grokipedia's:

                > Gamergate was a grassroots online movement that emerged in August 2014, primarily focused on exposing conflicts of interest and lack of transparency in video game journalism, initiated by a blog post detailing the romantic involvement of indie developer Zoë Quinn with journalists who covered her work without disclosure. The controversy began when Eron Gjoni, Quinn's ex-boyfriend, published "The Zoe Post," accusing her of infidelity with multiple individuals, including Kotaku journalist Nathan Grayson, whose article on Quinn's game Depression Quest omitted any mention of their prior personal contact. This revelation highlighted broader patterns of undisclosed relationships and coordinated industry practices, such as private mailing lists among journalists, fueling demands for ethical reforms like mandatory disclosure policies.

                I don't care about "Gamergate" and never use Grokipedia, but Wiki definitely has a stronger slant: it's as if an article about Black Lives Matter started with a statement that it was a campaign meant to scam people to pay for mansions for leadership.

                • brendoelfrendo 2 days ago ago

                  Wikipedia's assessment is more accurate. Wikipedia does go on in its second paragraph to explain the context of the start of the campaign, including "The Zoe Post" and the accusations of conflict of interest. But the broader impact of Gamergate was as a misogynistic online harassment campaign, and Wikipedia is correct to make that the central part of its summary. Just because Grokipedia is more reluctant to state a conclusion does not make it less biased.

                • yongjik 2 days ago ago

                  Well, I'm naively assuming Grokipedia is being sympathetic to the cause(?) of Gamergate, but if the best thing they could lead the article with was essentially "It all started when someone got mad at his ex-girlfriend and her many other boyfriends and wrote something that went viral" ...

                  ... it does sound like an online harassment campaign.

                  • baublet 2 days ago ago

                    It was. In hindsight it signaled the beginning of the mass weaponization of the internet via social media. It also was NOT grassroots lol. It was very specifically and intentionally inflamed and groomed and funded by people like Steve Bannon and his good buddy Jeffrey Epstein. It wouldn’t have such a big Wikipedia article without them.

                • vharuck a day ago ago

                  As somebody who supported GG for the first month or so, Wikipedia has the better intro from where things stand in 2026. GG started by piggybacking on general distrust of gaming journalists, but was quickly consumed by misogyny.

                  An article doesn't avoid bias by avoiding unpleasant facts.

              • andoando 2 days ago ago

                Which facts are represented is just as important as whether they're factual, though.

                "Brian hit Jim" can be a fact. But if you omit "Jim murdered Brian's whole family", it's a distortion of the truth.

                • bdangubic 2 days ago ago

                  specific examples, other than the fictitious Jim & Brian?

                  • andoando 2 days ago ago

                    I haven't read Wikipedia in a long time, so I can't answer your question. I am just pointing out that saying "the facts are correct" is not enough to say there is no bias on Wikipedia

          • AuryGlenz 2 days ago ago

            [flagged]

            • JumpCrisscross 2 days ago ago

              The Minnesota Transracial Adoption Study was methodologically flawed. “Children with two black parents were significantly older at adoption, had been in the adoptive home a shorter time, and had experienced a greater number of preadoption placements.”

              Reframed, the study seemed to find (a) black kids are adopted less readily and (b) the longer a kid spends in the foster system, the lower their IQ at 17. (There is also limited controlling for epigenetic factors because we didn’t understand those well in the 1970s and 80s.)

              Based on how new human cognition is, and genetically similar human races are, it would be somewhat groundbreaking to find an emergent complex trait like IQ to map to social constructs like race, particularly ones as broad as American white and black. (There is more genetic diversity in single African tribes than in some small European countries. And American whites and blacks are all complex hybridized social categories.)

              [1] https://en.wikipedia.org/wiki/Minnesota_Transracial_Adoption...

              • AuryGlenz 2 days ago ago

                [flagged]

                • tptacek 2 days ago ago

                  What? No you can't.

                  And: it remains perfectly OK to study racial differences in IQ. It's an actively studied topic. In fact, it's studied by at least three major scientific fields (quantitative psychology, behavioral genetics, and molecular genetics). The idea that you can't is a cringe online racist canard borne out of the fact that the studies aren't coming out the way they want them to.

                  • AuryGlenz 2 days ago ago

                    Does it now? Noah Carl would disagree. He was a researcher at Cambridge University who was dismissed after an open letter, signed by over 1,400 academics and students, accused him of "racist pseudoscience" for merely arguing that race-IQ research should not be off-limits.

                    James Flynn (of the Flynn effect) has also publicly stated that grants for research clarifying genetic vs. environmental causes of IQ gaps weren't approved because of university fears of public furor.

                    • tptacek 2 days ago ago

                      You're trying to axiomatically win an argument that is already settled empirically. It won't work. You can just read the papers. My point being: the papers exist, and more are published every year. Once you acknowledge that, your argument is dead. Literally no matter what the papers say. Don't make dumb arguments.

                      Noah Carl has a sociology doctorate. He doesn't work in the fields that study this; he just tries to launder his way into them.

                      Flynn is, famously, a race/IQ skeptic.

                    • akerl_ 2 days ago ago

                      https://medium.com/@racescienceopenletter/open-letter-no-to-...

                      https://www.theguardian.com/education/2019/may/01/cambridge-...

                      > for merely arguing that race-IQ research should not be off-limits.

                      Help me connect the dots here.

            • AlotOfReading 2 days ago ago

              It seems like the root of your statement is treating "race" as a purely biological classification. Wikipedia correctly notes the consensus position that race is a social construct [0] that's difficult to use accurately when discussing IQ. Grok makes the implicit and incorrect assumption that genetic factors = race, among other issues.

              [0] https://www.genome.gov/genetics-glossary/Race

              • darkwater 2 days ago ago

                I wonder how much longer that link will stay up with the current administration...

              • AuryGlenz 2 days ago ago

                Ok, change it to "what we call race as a proxy for general geographic locations that people's ancestors come from."

                Which is what we all mean by race, anyways.

                • AlotOfReading 2 days ago ago

                  That's not what your previous post was talking about. But if you insist, at least make your point clear. "African Americans" and "Africans" are wildly different genetic populations that get subsumed under the same "Black" racial category in the US. Which one were you talking about?

                  The latter is more genetically diverse than any other human population by an incredible margin. Making generalized statements about them is impossible (including this one). As for African American populations, ancestry estimates of how closely related they are to African populations vary massively for each individual. Many people are much closer to "white" populations than any African population, due to the history of African Americans in North America. If you really mean race as a geographic proxy, the "black" label is simply confusing what you actually mean.

                  • AuryGlenz 2 days ago ago

                    I understand your point (although I find the babybathwater-ing to be tiring), and I didn't mean to be drawn into a debate about this. But that was entirely the point - that there's a debate. Wikipedia would have you believe that there isn't.

                    For what it's worth, I'm mixed as hell. European, Asian, Jewish, north african, and native american. I look white, though - and I am, in fact, majority European ancestry. Therefore in most studies (of anything race related), I would presumably be lumped in with white people. It's not a perfect "measure," but it's still the easiest proxy for geographic location of our ancestors that we have and on a population level it works just fine for studies.

                • lobf 2 days ago ago

                  But then what are you arguing? Geographic location determines IQ? (An inherently flawed measurement itself)

                  • AuryGlenz 2 days ago ago

                    I'm not arguing anything other than the fact that Wikipedia is biased.

                    Though I will say it's beyond argument that geographic ancestry has an effect on IQ on a statistical group level (the reasons for this are what's debated), and that IQ is the best measurement of G that we have.

                    • lobf 2 days ago ago

                      Okay but you need to… actually present these arguments. Right now you’re stating your position and then affirming it as fact and expecting everyone to trust you.

                      • AuryGlenz 2 days ago ago

                        I already gave you two large meta-analyses and more on the first point, and as far as the second goes, in the field of psychology that's as established as 2+2=4 is in the math world. If you really want to research that yourself go ahead; I don't feel like I should need to waste my time.

                    • lcnPylGDnU4H9OF 2 days ago ago

                      > I'm not arguing anything other than the fact that Wikipedia is biased.

                      It "is biased" to document human knowledge as accurately as possible. Is there something wrong with that?

                      • AuryGlenz a day ago ago

                        Because it's not accurate? As I and others have pointed out?

            • epgui 2 days ago ago

              Have you considered the possibility that your opinion is just not representative of the scientific consensus?

              • AuryGlenz 2 days ago ago

                I asked ChatGPT on whether or not it was the "scientific consensus."

                "Anonymous surveys of intelligence experts reveal division: a 2016 survey found that about 49% attributed 50% or more of the Black-White gap to genetics, while over 80% attributed at least 20%; an earlier 1980s survey showed similar splits. These views are more common in private or anonymous contexts, contrasting with public statements from bodies like the APA that find no support for genetic explanations."

                Hm, sure seems like Wikipedia should probably have a more balanced, nuanced discussion considering the experts are split at least 50/50.

                • lcnPylGDnU4H9OF 2 days ago ago

                  The "scientific consensus" the parent comment mentioned is referring to published studies, with data to back up their conclusions. The numbers you are citing seem to be from an opinion poll. Where did any of the 49% surveyed get the idea that "50% or more of the Black-White gap" can be "attributed" to genetics? What is their methodology for the attribution?

                  Bringing up an opinion poll as a counterpoint makes it read like you're arguing that Wikipedia should focus less on fact and more on opinion. Of course, you're free to think what you wish, but I suspect that's where most disagree.

                  • AuryGlenz a day ago ago

                    We don't really have "intelligence genes" mapped out, if they exist. Therefore, something like this, from Wikipedia: "Genetics do not explain differences in IQ test performance between racial or ethnic groups" is effectively a lie.

                    Genetics certainly don't explain all the differences in IQ. They very well might not explain the majority of the difference. However, considering we know that intelligence is quite heritable, along with various adoption and twin studies that have happened throughout the decades (along with simple freaking logic), we have a pretty good idea that it explains at least some of the difference. That "opinion poll," while not super-great because only some elected to reply, was a poll of experts in the fields that study this stuff, not random people.

                    A real unbiased article would mention that (and perhaps whatever counterarguments there are), not straight up do the encyclopedia equivalent of sticking their fingers in their ears and going "nah uh I can't hear you."

              • charcircuit 2 days ago ago

                Wikipedia does not care about scientific consensus. It just summarizes "reliable" secondary sources.

                • epgui 2 days ago ago

                  Wrong in two different ways:

                  - this tends to approximate consensus.

                  - Wikipedia does care, and has a policy on this: https://en.wikipedia.org/wiki/Wikipedia:Scientific_consensus

                  • charcircuit 2 days ago ago

                    >and has a policy on this

                    Look at the top of that page.

                    >This is an essay. It contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article or a Wikipedia policy, as it has not been reviewed by the community.

                    • epgui 2 days ago ago

                      That’s like arguing that “forming a queue at the store” is not an official policy.

                      The document outlines normative / prescriptive approaches that are followed in practice.

            • lobf 2 days ago ago

              >As you can see, Wikipedia is very dismissive to the point of effectively lying.

              Did I miss where you presented evidence that wikipedia is wrong? You seem to be taking an assumption you carry (race is related to IQ) and assuming everyone believes it's true as well, thus wikipedia is lying.

              • AuryGlenz 2 days ago ago

                There have been many, many studies that show that "race" is related to IQ. A true, unbiased article would show that as well as any well-founded criticisms of it.

                • lobf 2 days ago ago

                  Can you cite them then?

                  • AuryGlenz 2 days ago ago

                    Roth, P. L., Bevier, C. A., Bobko, P., Switzer, F. S., & Tyler, P. (2001). Ethnic group differences in cognitive ability in employment and educational settings: A meta-analysis. Personnel Psychology, 54(2), 297–330.

                    Rushton, J. P., & Jensen, A. R. (2005). Thirty years of research on race differences in cognitive ability. Psychology, Public Policy, and Law, 11(2), 235–294.

                    Neisser, U., et al. (1996). Intelligence: Knowns and unknowns. (APA Task Force report). American Psychologist, 51(2), 77–101.

            • erxam 2 days ago ago

              [flagged]

          • arjie 2 days ago ago

            I’d say Wikipedia definitely has a strong “woke” bent to it. Either in the language or the choice of what facts to show. Here’s an example I deleted that had been there for quite a while https://en.wikipedia.org/w/index.php?title=Salvadoran_gang_c...

            I really like Wikipedia, though, and I think over time we will get around to fixing it up.

            • klausa 2 days ago ago

              Why did you feel this passage was worth deleting?

              • arjie 2 days ago ago

                Anyone familiar with Wikipedia etiquette knows how to find the answer to this question. Rather than getting into an argument here about a subject there, I'd prefer you familiarize yourself with the norms of that community, and if you already have or are experienced with them, then you know where to discuss the subject guided by those norms.

                • scared_together 2 days ago ago

                  But you’re responding to a comment here, not there. So why not abide by the norms that prevail here?

                  • arjie 2 days ago ago

                    My experience is that we end up debating the norms because this forum has different views than Wikipedia itself. That’s interesting to some but not to me so I’m opting out.

                    In addition, the answer to the question is already available so I want any question asker to put in a little bit of effort and if they’re not going to do that then I’m not really interested in talking to them since I prefer peer interactions to tutorials.

          • gowld 2 days ago ago

            It's not errors of fact, it's errors of omitted facts.

            • ibero 2 days ago ago

              Are there actual good examples showing errors of omitted facts on Wikipedia that are verifiably correct, that demonstrate how it is "captured"?

            • decimalenough 2 days ago ago

              [flagged]

        • freehorse 2 days ago ago

          I can understand somebody not liking Wikipedia; I cannot understand at all how somebody who is not Elon could like/prefer "grokipedia" as an idea or implementation.

          • scottyah 2 days ago ago

            > "grokipedia" as idea

            So you can understand someone not liking something, but you cannot understand that person liking the idea of an alternative? What is the idea for you if not just an alternative to the established service with the undesired part changed?

            • freehorse 2 days ago ago

              Because not liking something does not imply liking any possible alternative.

              Which one is the "undesirable part changed" here? Wikipedia is written by humans, it has a not-for-profit governance model, it encompasses a large, international community of authors/editors who attempt to operate democratically, and it has an investment/commitment in being an openly available and public source of information. Grokipedia, on the other hand, is AI-generated and operated by a for-profit AI company. Even if "grokipedia" managed somehow to get traction and "overthrow" wikipedia, there is no reason on earth why a company would operate it for free and not try to make a profit out of it, or use it for its own ends in ways much more direct than what may or may not be happening to wikipedia. Having a billionaire basically control something that may be considered the "ground truth" of information seems like a bad idea, and having AI generate it an even worse one.

              I can understand somebody not liking something in how wikipedia is governed or operated; after all, anything that involves getting humans to work together at such a scale is bound to be challenging. I can understand somebody ideologically disagreeing with some of the stances that such a project eventually has to take (even if one tries to be as neutral as possible, it is impossible to avoid some clash somewhere about where exactly that neutrality lies). But grokipedia is much more than "wikipedia but ideologically different".

              edit: just to be clear, I see a critique of the "idea of grokipedia" as, e.g., the critique of it being a billionaire-controlled, AI-generated project meant to substitute for wikipedia; a critique of the implementation would be finding flaws in actual grokipedia articles (overall). I think the idea of it is already flawed enough.

              • BurningFrog 2 days ago ago

                I'll spell out the argument:

                Wikipedia is fine for uncontroversial facts. The obscure ones can have individual mistakes but it's generally correct.

                For controversial topics, it's an eternal battle between factions of "volunteers" trying to present their view of a conflict. The articles reflect which side has the best organized influencer operations. Factual truth may or may not shine through, but as a side effect, not a result of the governing process.

                Grokipedia operates by Grok writing what it considers the true and interesting facts. That doesn't mean it's always right, but it's a model far less influenced by influencer operations.

                I wildly disagree with the critique based on the wealth of the top executive. I care about the truth and quality of the articles.

                • vharuck a day ago ago

                  >Grokipedia operates by Grok writing what it considers the true and interesting facts. That doesn't mean it's always right, but it's a model far less influenced by influencer operations.

                  If Grok is trained on a corpus of information written by humans trying to influence other humans, and it has no ability to perform its own original investigation in the real world, then how can it be anything but the product of influence?

            • wat10000 2 days ago ago

              Not all alternatives are necessarily worthy. I can understand someone not liking tomatoes. I can't understand someone liking depleted uranium.

              • hunterpayne 2 days ago ago

                Maybe ask a Ukrainian soldier which they prefer (modern armor is often made of depleted uranium). Environment shapes such preferences far more than personality.

              • bdangubic 2 days ago ago

                what do you have against depleted uranium? you know what they say, one man’s trash is another man’s treasure :)

            • debugnik 2 days ago ago

              They meant the idea of Wikipedia rewritten by Grok (or another controversial LLM) specifically, not just any alternative.

            • 2 days ago ago
              [deleted]
          • atonse 2 days ago ago

            > I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as idea or implementation.

            Really? Have you used AI to write documentation for software? Or used AI to generate deep research reports by scouring the internet?

            Because, while both can have some issues (but so do humans), AI already does extremely well at both those tasks (multiple models do, look at the various labs' Deep Research products, or look at NotebookLM).

            Grokipedia is roughly the same concept of "take these 10,000 topics, and for each topic make a deep research report, verify stuff, etc, and make minimal changes to the existing deep research report on it. preserve citations"

            So it's not like it's automatically some anti-woke can't-be-trusted thing. In fact, if you trust the idea of an AI doing deep research reports, this is a generalizable and automated form of that.

            We can judge an idea on its merits, politics aside. I think it's a fascinating idea in general (like writing software documentation or doing deep research reports), setting aside whether it needs tweaks to remove political bias.
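            The batched pipeline described above ("for each topic, make a deep research report, verify, make minimal changes, preserve citations") can be sketched as follows. This is pure speculation about how such a system might work: `research` is a stub standing in for a deep-research call, and the citation-preservation guard is a hypothetical safeguard, not anything xAI has documented.

```python
import re

def citations_of(text):
    """Collect [n]-style citation markers so we can check they survive."""
    return set(re.findall(r"\[\d+\]", text))

def update_report(topic, existing, research):
    """One 'minimal changes' pass over an existing report for a topic."""
    draft = research(topic, seed=existing)
    # Guard: refuse drafts that drop citations present in the old report.
    if not citations_of(existing) <= citations_of(draft):
        return existing
    return draft

# Toy run with a stub standing in for the research model:
stub = lambda topic, seed: seed + " (refreshed)"
out = update_report("Cats", "Cats purr. [1]", stub)
```

            The point of the guard is that "preserve citations" becomes a mechanical check rather than a hope.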

            • chipotle_coyote 2 days ago ago

              > Have you used AI to write documentation for software?

              Hi. I have edited AI-generated first drafts of documentation -- in the last few months, so we are not talking about old and moldy models -- and describing the performance as "extremely well" is exceedingly generous. Large language models write documentation the same way they do all tasks, i.e., through statistical computation of the most likely output. So, in no particular order:

              - AI-authored documentation is not aware of your house style guide. (No, giving it your style guide will not help.)

              - AI-authored documentation will not match your house voice. (No, saying "please write this in the voice of the other documentation in this repo" will not help.)

              - The generated documentation will tend to be extremely generic and repetitive, often effectively duplicating other work in your documentation repo.

              - Internal links to other pages will often be incorrect.

              - Summaries will often be superfluous.

              - It will love "here is a common problem and here is how to fix it" sections, whether or not that's appropriate for the kind of document it's writing. (It won't distinguish reliably between tutorial documentation, reference documentation, and cookbook articles.)

              - The common problems it tells you how to fix are sometimes imagined and frequently not actually problems worth documenting.

              - It's subject to unnecessary digression, e.g., while writing a high-level overview of how to accomplish a task, it will mention that using version control is a good idea, then detour for a hundred lines giving you a quick introduction to Git.

              As for using AI "to generate deep research reports by scouring the internet", that sounds like an incredibly fraught idea. LLMs are not doing searches, they are doing statistical computation of likely results. In practice the results of that computation and a web search frequently line up, but "frequently" is not good enough for "deep research": the fewer points of reference for a complex query there are in an LLM's training corpus, the more likely it is to generate a bullshit answer delivered with a veneer of absolute confidence. Perhaps you can make the case that that's still a good place to start, but it is absolutely not something to rely on.

              • dyates 2 days ago ago

                >LLMs are not doing searches, they are doing statistical computation of likely results.

                This was true of ChatGPT in 2022, but any modern platform that advertises a "deep research" feature provides its LLMs with tools to actually do a web search, pull the results it finds into context and cite them in the generated text.
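                The tool loop described above (model proposes a search, results are pulled into context, the answer cites them) looks roughly like this. All names here (`fake_search`, `fake_model`) are illustration-only stubs; real platforms wire the same shape to an actual search API and model.

```python
def deep_research(question, model, search):
    """One search-augmented answer: query -> results -> cited answer."""
    query = model("make a search query for: " + question)
    results = search(query)                # e.g. [(url, snippet), ...]
    context = "\n".join(f"[{i+1}] {url}: {snip}"
                        for i, (url, snip) in enumerate(results))
    return model(f"Answer citing [n]:\n{context}\nQ: {question}")

# Stubs so the loop is runnable without any external service:
fake_search = lambda q: [("example.org", "snippet about " + q)]
fake_model = lambda prompt: ("q" if prompt.startswith("make") else
                             "Answer based on [1].")
answer = deep_research("topic", fake_model, fake_search)
```

                The generated text is grounded in the fetched snippets rather than in the model's weights alone, which is the distinction being made above.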

              • atonse 2 days ago ago

                That's not at all been my experience. My experience has been one of constant amazement (and still surprise) when it catches nuances in behavior from just reading the code.

                I'm sure there are many variables across our experiences. But I know I'm not imagining what I'm seeing, so I'm bullish on the idea of an AI-curated encyclopedia, whether Elon Musk is involved or not.

            • freehorse 2 days ago ago

              No, I don't trust an encyclopedia generated by AI. Projects with much narrower scopes are not comparable.

              edit: I am not very excited by AI-generated documentation either. I think that LLMs are very useful tools, but I see a potential problem when the sources of information that their usefulness largely depends on are also LLM-generated. I am afraid that this will inevitably result in a drop in quality that will also affect the LLMs themselves downstream. I think we underestimate how much LLMs depend on the intentionality of human-written text in their training sets and context windows to give relevant, useful output.

            • 2 days ago ago
              [deleted]
          • psyklic 2 days ago ago

            Elon at some point threatened to have an LLM rewrite all of the training data to remove woke. I assume Grokipedia is his experiment at doing this (and perhaps hoping it will infect other training sets too?) ...

        • Rover222 2 days ago ago

          I appreciate you

      • Rover222 2 days ago ago

        Wikipedia obviously is left leaning.

        • hananova 2 days ago ago

          Well yes, but so is reality. And Wikipedia as an encyclopedia is supposed to document reality. So what's the problem?

          • Rover222 2 days ago ago

            That's an interesting take. Left or right leaning is kind of just relative to society as a whole. If the world really was so left, I think we'd be calling Wikipedia neutral.

            • hananova a day ago ago

              But many people do call Wikipedia neutral, or at least mostly neutral.

          • beeflet 2 days ago ago

            [flagged]

            • brokencode 2 days ago ago

              Have you ever wondered why the most educated and scholarly people in the country are left leaning?

              I suppose you think they were indoctrinated. But finding and teaching the truth is essentially their job. Learning how to evaluate sources and approach research logically is like academia 101.

              So doesn’t it seem strange that so few of them ever manage to see that they’re being indoctrinated?

              Or do you think a person’s political beliefs are assigned at birth and lefties just like academia for some reason?

              • beeflet a day ago ago

                I have never wondered that because it is caused by many obvious factors.

                >So doesn’t it seem strange that so few of them ever manage to see that they’re being indoctrinated?

                I don't think most people care. They are primarily interested in career progression, social status, protecting the feelings of their peers.

                They are willing to accept whatever ideology that occupies the water that they swim in. If the status quo was right wing, they would adopt the views of the right wing.

                Beyond academia 101, you use that fundamental understanding of the scientific process to break the system, work backwards to justify your conclusion, p-hack, ensure grants and scholarships go to the correct people and whatever else it takes to succeed in that environment. I went to college, I've seen it happen in front of me.

                It's the academic's job to find and teach the truth in the same way that it's the mechanic's job to fix your car. But the mechanic's incentives drive him to upsell you, charge you for work you don't need, and, hell, if he breaks something in the process you'll be coming back a lot sooner. So too it is the job of the knowledge worker to create more knowledge work.

                >Or do you think a person’s political beliefs are assigned at birth and lefties just like academia for some reason?

                I think that when a "normie" is told to imagine their future, they literally imagine themselves. That is to say, they don't imagine the mechanics of what their daily routine would be or imagine what values and strengths they would have; they imagine their future in the same way that they look at themselves in a mirror. There is a literal self-image involved.

                They subscribe to a certain aesthetic (say, an upper-class aesthetic, or an artistic aesthetic, or a blue collar aesthetic or a military aesthetic) based on if they think it looks cool and then they work backwards to figure out what beliefs, values, strengths, etc. they need to fit in with society and play a certain designated character which is probably inspired by something they saw on TV. I am not joking.

                So if you adopt, say, a punk rock style you would need to act rebellious to fit in, even if you do not feel the innate urge to do so based on your life experiences. When you enter the mosh pit it is like a safe, culturally designated, controlled aggression. Like sports. Because this style is associated with a certain type of rebellion you would get roped into leftist stuff. And because you become leftist, you naturally want to go to college to fraternize with more leftist types. The whole admission process is designed to filter out students based on their personality, not their tenacity for learning.

                You can imagine what the reverse of this would look like for someone adopting right-wing associated aesthetics and culture. There are even right wing academic groups, controlled by a narrower overton window due to their weakness in the academic and bureaucratic domain.

                None of this has to do with evaluating sources and approaching research logically. Do you believe that people come out of the womb with a lifelong desire to pursue the truth?

                This whole system of acculturation just seems too fake, orderly, and planned for me. Furthermore, I think it will ruin the country because it is too individualistic. In an ideal world, your political beliefs would stem from a combination of your life experiences and a practical analysis of the demands of your time.

                It's treating politics like some sort of sports team rather than something strategic and decisive.

                • brokencode 13 hours ago ago

                  That’s quite a cynical way of looking at the world. That political belief is driven by conformance into social groups rather than an individual’s desire to seek the truth.

                  If that’s truly what you believe, then I guess I’ve got no argument that could sway you. In fact, using your view of the world, all political debate is worthless because nobody really seeks an objective truth anyway.

            • baublet 2 days ago ago

              Are you suggesting that academia and all of the other places where people learn and know stuff for a living are full of leftists as part of some conspiracy against you and the right wing?

              Touch grass, my dude. These are the thoughts of someone who spends too much time on X.

              • beeflet 2 days ago ago

                I don't use X or Grok.

                The fact that the elite and knowledge workers in this country are generally more left-leaning is pretty evident. The right wingers in these ranks make up a distinctive subgroup. These are the thoughts of pretty much everyone everywhere in the country and this becomes apparent if you ask randoms on the street, or you have attended college lectures, or have used a dictionary, or have read wikipedia talk pages, or have compared news sources.

                Being a welder or a farmer or a carpenter requires "learning and knowing stuff". These right-wing associated jobs just don't produce knowledge for other people as an end product. That is what makes knowledge work an elite position; not everyone has the luxury of doing knowledge work.

                I take issue with your implication that we should all bow down to knowledge workers because they know better. The knowledge workers are the issue here. They are the subject of discussion. This is like when the police investigate themselves and find no wrongdoing.

                If we found that the population of professional athletes became dominated by a certain cultural ingroup, and eventually we failed to bring home gold medals at the olympics, we might be correct to question the state of our meritocracy wrt athleticism. Regardless of what people who "improve and use their bodies for a living" think.

                The US is losing intellectually and technologically to countries like China. I call into question the general legitimacy of our academic and journalistic institutions.

                • adi_kurian 2 days ago ago

                  You know the trades are union, i.e., left wing?

                  If you went to my hometown and asked working class men who work with their hands whether they support the conservatives, they'd laugh in your face.

                  Have you heard of the AFL-CIO?

                • nixon_why69 2 days ago ago

                  > The US is losing intellectually and technologically to countries like China. I call into question the general legitimacy of our academic and journalistic institutions.

                  China has tons of green energy, high speed rail and frequently extols the virtues of socialism.

                  • beeflet a day ago ago

                    The success of Chinese infrastructure validates my concern, which is with a specific faction of the American/Western elite.

                    • nixon_why69 16 hours ago ago

                      Bro, I sympathize a little bit but looking to illiterates who hate green energy is not the solution. We could probably agree that in China, engineers and scientists are more listened to but the solution for America is not "less reading".

                • kartakrak 2 days ago ago

                  hn is left leaning too

                  even pg tries to shit on elon on x from time to time

                  funniest bit was garry tan hinting that yc x and y seasons have to be renamed because pg doesn't like it

            • 2 days ago ago
              [deleted]
      • annexrichmond 2 days ago ago

        Are you really suggesting that everything in Wikipedia is truthful, complete, and free of all biases?

        • hananova 2 days ago ago

          Maybe not all of it, but a vast majority of it is. And almost certainly the parts that drove Elon to slopify it are true.

          • annexrichmond 2 days ago ago

            Citation needed.

            • hananova a day ago ago

              That's not how it works. You're making the extraordinary claim that a widely trusted and strictly moderated encyclopedia with tons and tons of citations to back up the truthfulness of its contents is not mostly true. You get to prove that assertion, since your claim is the extraordinary one.

              • salad-tycoon 13 hours ago ago

                Epstein files. State actors, company security departments, activists, etc influence and seemingly control the more meaningful/controversial Wikipedia sections.

                I think it’s just an inherent flaw in ANY centralized and universal repository of knowledge.

                I haven’t actually ever been on grokipedia but I’m sure Elon influences it, I mean if I paid for something I’d expect it to be to my liking too.

        • comicjk 2 days ago ago

          Not everything on Wikipedia is true, but the parts Elon Musk hates most are probably true.

          • annexrichmond 2 days ago ago

            [flagged]

            • scared_together 2 days ago ago

              Not sure if this is an example of something Musk hates, but here’s a paragraph from the “2016 presidential campaign” section of the Donald Trump article on Wikipedia.

              > Trump's FEC-required reports listed assets above $1.4 billion and outstanding debts of at least $265 million.[140][141] He did not release his tax returns, contrary to the practice of every major candidate since 1976 and to promises he made in 2014 and 2015 to release them if he ran for office.[142][143]

              I could not find any mention of tax returns on the Donald Trump page of Grokipedia.

              Wikipedia:

              https://en.wikipedia.org/wiki/Donald_Trump

              Grokipedia:

              https://grokipedia.com/page/Donald_Trump

            • mbs159 2 days ago ago

              Well, you yourself did not provide any sources to back the assertion that some of what is on Wikipedia is false.

        • brokencode 2 days ago ago

          No, when did I say that? That’s impossible for anything of the size of Wikipedia.

          I was suggesting that Elon Musk, a man who has donated hundreds of millions to Trump and other Republican causes, who has numerous financial conflicts of interest, and who has publicly lied numerous times, is never going to produce a more unbiased and factual encyclopedia than Wikipedia.

          Especially when his effort to do so is essentially AI slop from a third rate LLM on top of his own biases.

          • annexrichmond 2 days ago ago

            Right, and Wikipedia leadership is free of any conflicts of interest:

            During and after her Wikimedia role, Maher drew fire for statements perceived as rejecting objective truth or Wikipedia’s traditional “free and open” model:

            • In a 2021 TED Talk, she described reverence for truth as potentially a “distraction” hindering common ground.

            • She called Wikipedia’s free-and-open ethos a “white male Westernized construct” that excluded diverse communities.

            • brokencode 2 days ago ago

              But she’s not the one writing and editing the wiki pages. It’s open, and there are open discussions as well. What’s it matter what she thinks?

              Why would I ever trust an encyclopedia totally controlled by one megalomaniac over one that I myself can contribute to?

        • 2 days ago ago
          [deleted]
        • 2 days ago ago
          [deleted]
      • tclancy 2 days ago ago

        [flagged]

    • notahacker 2 days ago ago

      Twitter's communication style being based around brevity, slang, memes, spam and non-threaded conversations seems particularly unlikely to be helpful for optimising LLMs

      • tclancy 2 days ago ago

        >Twitter's communication style being based around brevity

        Is this still true? Every once in a while someone sends a link around to some madman explaining how race or economics or whatever "really" works and it's like a full dissertation with headings, footnotes, clip art. They're halfway to reinventing Grok-o-pedia right there in Twitter. I mean X. I was promised that "X gonna give it to you" but it turns out "it" is some form of brain chlamydia.

        • 3rodents 2 days ago ago

          Elon was running some sort of $1m competition for the "best" Twitter post for a few months. I think those types of dissertations about phrenology and the like have fallen off a cliff since the competition ended.

          • tclancy 2 days ago ago

            Ooohhhh. I am both glad and horrified to know this. Not how Seneca told me life would be when I learned things.

        • delecti 2 days ago ago

          There's probably a selection bias involved. I haven't been a regular user for a while now, but the big threads like that were significantly outnumbered by individual posts. Meanwhile, I'm not likely to send someone a link to a single-sentence tweet, because there's not enough meat to it. The stuff that could be shared would usually be an image from the tweet, which I could share directly.

      • aleph_minus_one 2 days ago ago

        > Twitter's communication style [...] seems particularly unlikely to be helpful for optimising LLMs

        This depends on what one wants to optimize the AI for. ;-)

      • libertine 2 days ago ago

        And the amount of bots there isn't helpful either.

        • facemelt2 2 days ago ago

          recent changes in their comment system have reduced my exposure to bots to a level I much prefer over every other platform I use

          • tanjtanjtanj 2 days ago ago

            How recent? As recently as last weekend I was seeing blue check marks replying with AI generated only-technically-related replies on top of the majority of the posts I looked at.

          • libertine 2 days ago ago

            If that's actually true, good for them, but after what I've witnessed there not that long ago, I doubt I'll try it ever again.

          • rvnx 2 days ago ago

            There are bots here too, a lot of them, to the point that the rules were amended, because it's very valuable to give points to new publications.

    • UncleOxidant 2 days ago ago

      > Giant waste of time while Anthropic/OAI keep surging forward.

      And Google. They're quietly making a lot of progress in the coding space with antigravity and Gemini 3.1.

      • koakuma-chan 2 days ago ago

        Has Antigravity gotten any better?

        • sunaookami 2 days ago ago

          It has gotten worse and they tightened the limits for paying customers recently: https://x.com/antigravity/status/2031835833716625883 (only announcement on Twitter, not in the app nor via email)

          • kivle 2 days ago ago

            Limits are so low that I cancelled after about two weeks on my initial $0 trial. I tried making a change to a tiny code base with Claude Sonnet (which they offer in Antigravity). It couldn't even finish the change before my weekly limit was used up, reset in 7 days.

            • koakuma-chan 2 days ago ago

              To be fair, you shouldn't expect them to subsidize Anthropic models. What about the limits for Gemini?

              • kivle 2 days ago ago

                I tried the Anthropic models because gemini-pro had already been rate limited with a 5 day wait. I got some actual usage out of the Google model, but laughably little compared to what I got with ChatGPT Plus. This is definitely not an imagined thing from my side, you just have to look at the Antigravity forums:

                https://discuss.ai.google.dev/new

        • UncleOxidant 2 days ago ago

          I find it pretty good. And Gemini 3.1 Pro seems quite capable. Not as good at some things as Claude, but better at others. I was trying to target a Verilog design to an uncommon FPGA and board, and Gemini went out and searched for the FPGA docs and examined the schematics for the board in order to do the pin assignments (it generated the .ccf file). Not sure if Claude could've done that.

        • htrp 2 days ago ago

          >There is currently no support for:

          >Bring-your-own-key or bring-your-own-endpoint for additional rate limits

          >Organizational tiers in general availability, or via contract [1]

          Literal clown car product.

          No plan for serious enterprise support (even 6 months after launch)

          [1]https://antigravity.google/docs/plans

        • BoredPositron 2 days ago ago

          Probably the best value for a good amount of Anthropic credits. You can also share your Google AI subscription with up to four family members, and they all get the same amount of credits...

    • jmspring 2 days ago ago

      Twitter has the mass adoption, and it takes an effort to avoid bot/particular view bias - but as a valuable content source, it's a far cry from what it once was before Musk took it over.

    • ben_w 2 days ago ago

      > Feel like the canary was when Grokpedia became a project. Giant waste of time while Anthropic/OAI keep surging forward.

      Really? I assumed that that whole thing was just a very direct `for each article in Wikipedia { article = LLM(systemprompt, article) }`
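      Spelled out in Python, that hypothesized loop is just a per-article map. The `llm` callable is a stand-in for any chat-completion client; none of these names come from xAI, this is purely the commenter's guess made concrete:

```python
def rewrite_corpus(articles, llm, system_prompt):
    """Rewrite every article with a single LLM pass, keyed by title."""
    return {title: llm(system_prompt, text)
            for title, text in articles.items()}

# Toy run with a stub "model" that just tags the text:
stub_llm = lambda sys, text: f"[rewritten per: {sys}] {text}"
out = rewrite_corpus({"Cats": "Cats are mammals."}, stub_llm, "be neutral")
```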

      Agree re Twitter "good" != valuable.

      • sroussey 2 days ago ago

        Where system prompt lists a certain someone’s latest tweets.

    • sheepscreek 2 days ago ago

      AFAIK Grok still doesn’t have a CLI coding agent that works with a subscription. That’s a shame. Grok Code Fast 1 was pretty impressive when it came out - for what it did, and they never followed it up with a new version.

      • sroussey 2 days ago ago

        You can use cursor with grok, though my experience is that grok is the worst of the API providers cursor supports.

    • laidoffamazon 2 days ago ago

      As someone trying to monitor the situation using Twitter the last few weeks it’s awful and it used to not be!

      • Rover222 2 days ago ago

        It’s flawed, but still the obvious place to monitor a situation.

        • rchaud 2 days ago ago

          It's long been taken over by Telegram, which among its other advantages (more like a message board than 'town square'), doesn't have hordes of people commenting "@grok explain this to me" under every post.

          • Rover222 2 days ago ago

            I've never even heard of telegram competing with X for live world events updates. But maybe I'm just missing out.

    • BurningFrog 2 days ago ago

      Grok is trained on pretty much the same giant web crawl/text corpus as the other AIs.

    • giancarlostoro 2 days ago ago

      > but I cannot imagine it's a valuable dataset.

      It's going to be a mixed batch, but any time there are world events, for as far back as I can remember, Twitter (now X) has always been first with breaking news. There's plenty of people and news orgs still on X because they need to be there for the audience.

    • samrus 2 days ago ago

      Twitter as a data source is interesting. I think it gets overhyped because that's Elon's grift. But I can't deny that the real-time info aspect of it is pretty valuable. I definitely think, though, that it's not much more valuable than the open internet as a context source. Everything worthwhile on Twitter will end up elsewhere with a bit of lag, and the stuff that won't is noise anyway.

    • vibeprofessor 2 days ago ago

      [dead]

    • EGreg 2 days ago ago

      I'm not a fan of Elon's software endeavors, ever since he bought Twitter and turned it into an even worse cesspool of angry political nonsense than it used to be. I don't like how he's been biasing Grok, etc.

      But, what exactly is so bad about Grokipedia? It's a different approach and I think a valid one: trying to do with AI what people have been doing manually at Wikipedia. I'm curious to hear the substantive comparisons.

      • kennywinker 2 days ago ago

        I think the issue is simply this: wikipedia trends towards unbiased info through use of the crowd. Grok, with a single owner with an ax to grind, trends towards whatever elon wants. It’s poisoned information under the control of one man - cyberpunk novels have been written about less.

        • wat10000 2 days ago ago

          A concrete example: a few weeks ago, Musk was making a big deal about how most of his massive net worth was not held in cash, and by a total coincidence the phrase "primarily derived from equity stakes rather than cash" showed up on his Grokipedia page in the section about net worth. I checked the pages of several other extremely wealthy people and none of them had such a comment.

        • tmp10423288442 2 days ago ago

          > wikipedia trends towards unbiased info through use of the crowd

          See, this is why people even give a project like Grokipedia the time of day. While in theory anyone can edit Wikipedia, in practice the moderators form a much smaller and weirder cabal, and they reject edits that go against their views. The frustration with the naive assertion that Wikipedia distills the wisdom of the crowds with the reality of Wikipedia on any page of note is what provides the psychic permission to even entertain a project with such obvious flaws as Grokipedia.

          • kennywinker 2 days ago ago

            > and they reject edits that go against their views

            Citation needed. See what i did there ;)

            They reject edits that go against their views on tone and sourcing not political views that i am aware of - i am sure it happens from time to time but unless there’s a consistant bias in one direction this isn’t a valid criticism of the political neutrality of wikipedia.

            Even if there is rampant bias in wikipedia, that’s a reason to fork it and change the structure and gatekeeping - not to replace it with a techno-authoritarian ai version controlled by a single billionaire. That’s amplifying the problem from an aggregate bias of 600,000 users who have made an edit in the last 30 days[1] to just one editor who uses ai to make it seem impartial.

            [1] https://expandedramblings.com/index.php/wikipedia-statistics...

            • tmp10423288442 2 days ago ago

              I would prefer to fork Wikipedia as well, but in practice I don't think that works, given the many failed Wikipedia forks of the past 20 years. On the internet, the only way to get any alternative to a widely-used source like Wikipedia is to use a significantly different approach. Otherwise, you just look like a cheap knockoff, even to people who might otherwise agree with your approach. Worse is better, after all - worse in most ways, but better or different in at least one innovative way.

              • kennywinker 2 days ago ago

                Well, here’s hoping grokpedia goes and joins the rest of the failed attempts.

      • Avshalom 2 days ago ago

        >>I don't like how he's been biasing Grok, etc.

        >>But, what exactly is so bad about Grokipedia

      • sumeno 2 days ago ago

        It's controlled by a guy who spends all day retweeting white supremacists and lying about his companies. Why should anyone who isn't a white supremacist use it?

        • baublet 2 days ago ago

          They would not. They do not.

  • moogly 2 days ago ago

    I feel xAI is just a very big version of the Boring Co. "flamethrower": an unserious endeavor which is just a reskinned existing tool (it was a reskinned weed burner), but people were wowed by it anyway, since Musk was behind it, and they all pretended it was something new and notable.

    The burning (heh) question is which SpaceX subsidiary will fail first, xAI or Tesla (not yet a subsidiary, but it's written in the stars (heh))?

    Then again SpaceX is also jumping the shark what with their orbital data centers (remember those?).

    Might be time to start a new Musk company soon.

    • 1vuio0pswjnm7 2 days ago ago

      "Might be time to start a new Musk company soon."

      This made me laugh

      How many times have we seen HN comments along the lines of "He started/runs [number] companies... therefore he is a genius"?

    • 2 days ago ago
      [deleted]
    • softwaredoug 2 days ago ago

      Crazy thing is if Hormuz stays closed, Tesla could theoretically be positioned to reap massive benefits. But they seem to be moving away from consumer electric cars as their focus.

      • LtWorf 19 hours ago ago

        With rolling blackouts and rationed fuel?

  • Sol- 2 days ago ago

    I don't use it myself, but I feel like the way Grok is integrated into Twitter is a pretty good thing for discussions, as it is certainly a more objective and rational voice than most human participants. I think it's good that people tag @grok if they don't understand something or want an opinion, even if it looks pretty silly to see "@grok is this true" repeated multiple times in replies.

    That said, Musk's attempts at misaligning the thing and making it prefer his opinions of course destroy any trust. It's surprising that it's seemingly as good and helpful as it is despite the corruption attempts.

    I also don't quite get how the business model is supposed to work out if its main usecase is to serve Twitter. I know they provide API access as all other models, but with how distrusted Musk is and how sensitive of a topic reliable model behavior is, they seem to sabotage themselves. Which company wants it to go mechahitler on them?

    • biggestfan 2 days ago ago

      I disagree, I find that the grok replies are terrible product UX. Not only do they clog up the replies of every popular post, they're also constrained to extremely short answers with no sources. The community notes system, while also flawed in its own ways, is at least not nearly as disruptive and usually provides a link.

      Trying to make social media a source of truthful information is always an uphill battle and doubly so for X.

      • visarga 2 days ago ago

        I like that you can ask Grok to search the social graph and comments. Hacker News also has a semantic search engine (https://hackersearch.net/); Reddit has none, which is a pity.

    • jjfoooo4 2 days ago ago

      I’m really, really uninterested in reading AI content that other people have generated. If I’m on Twitter, I’m looking for what humans have to say.

      • mbrochh 20 hours ago ago

        I'm afraid that ship has long sailed. The humans left on Twitter are all just copy pasting AI slop now...

    • createaccount99 2 days ago ago

      > Grok is integrated into Twitter is a pretty good thing for discussions, as it is certainly a more objective and rational voice than most human participants

      Hard agree.

    • daveguy 2 days ago ago

      Grok is a bot that:

      1) sometimes goes mechahitler

      2) was trained to be biased against empathy and understanding (because woke).

      3) is customized to spout Elon's opinions as fact.

      Claiming it is "objective and rational" seems like a misjudgement to me. If it really is more objective and rational than the average xitter poster, that says more about that platform than it does about Grok.

      • Sol- 2 days ago ago

        I guess I was mostly arguing that the integration of something like Grok into Twitter was a net positive for online discussion, as anyone now has a fact checker and explainer at hand to defuse irrational online arguments.

        Also I think you overrate Musk's success in fiddling with the model. As I have written, I also don't like his attempts to tune it to his tastes, but if you see the outputs that people get from Grok, it seems mostly fine except in the specific scenarios that Musk seems to have focused their misalignment on.

        Of course something like Claude being integrated into Twitter would likely be better.

        • daveguy 2 days ago ago

          He doesn't have to fiddle with the model because he gets to inject his own opinion into the context MitM style.

          But I get what you're saying now, a fact checker available to query during an online discussion would be helpful. Assuming the checkerbot was actually independent/neutral and backed responses with sources. Definitely not assumptions you can make with grok.

      • tootie 2 days ago ago

        It was also producing CSAM on demand for a few months.

        • Tadpole9181 2 days ago ago

          It still is, you just need to pay.

      • ozozozd 2 days ago ago

        You’re right. But it appears they may have failed with 2) and 3) because I frequently see Grok spit out content that doesn’t agree with the creators’ narrative.

      • andai 2 days ago ago

        From what I heard it was designed to prefer truth over political correctness. I don't use Grok or Twitter though so I cannot comment on whether that aim was achieved (or even seriously attempted).

        I will however note that when I asked ChatGPT for an LLM prompt for truthfulness, it added "never use warm or encouraging language."

        It would appear that empathy and truth are in conflict — or at least the machine thinks so!

      • Sohcahtoa82 2 days ago ago

        > 1) sometimes goes mechahitler

        That "MechaHitler" episode lasted less than a day.

        > 2) was trained to be biased against empathy and understanding (because woke).

        No, it was trained and instructed to be truthful, even if the truth is deemed politically incorrect.

        > 3) is customized to spout Elon's opinions as fact.

        Certainly a nugget of truth there.

        > Claiming it is "objective and rational" seems like a misjudgement to me.

        I do believe it's generally objective, simply due to the fact that despite how much Elon tries to push it to the right, it still dunks on right-wingers all the time when they summon Grok to back up a bullshit story, but Grok debunks it instead.

      • 2 days ago ago
        [deleted]
    • zemo 2 days ago ago

      respectfully, I do not find Mecha Hitler to be particularly free of bias.

  • twodave 2 days ago ago

    Used Grok for the first time, in a Tesla, and for that purpose it actually made a lot of sense. It’s very well-integrated into the car’s systems and communication style while driving tends to be very tweet-esque. I think this is the niche they should lean into more (live assistant, e.g. Jarvis type stuff) and leave the more agentic niche to folks like Anthropic. Maybe even delegate more difficult or background tasks to those sorts of models. As a verbal interface I found it pretty pleasant.

    • dkobia 2 days ago ago

      I thought Grok in the car was awesome until it went off on a tangent and started praising Elon.

    • SaltyBackendGuy 2 days ago ago

      I am honestly a bit disappointed it couldn't do basic things, like play X on Spotify. To be fair, I accidentally activated Grok by holding the voice command button too long (which is another UX issue, i.e. two voice command interfaces).

      • MetaWhirledPeas 2 days ago ago

        It'll get there. Initial implementation was just talk to Grok. Now it has improved to allow adjustments to navigation routes.

        • 2 days ago ago
          [deleted]
        • tombert 2 days ago ago

          I mean, even Google Home and Alexa could handle playing a song on Spotify by me asking for it a decade ago. It's baffling that wasn't one of the first things implemented in Grok for Tesla.

          • MetaWhirledPeas 2 days ago ago

            The built-in assistant already does a great job of this, so whatever they do with Grok they'll want it to be better in some way. Like, "help me find songs similar to..." or "help me find that one song by that one guy with these lyrics".

    • andai 2 days ago ago

      What's the difference between Jarvis and agentic?

      • twodave 2 days ago ago

        Is this a serious question? In my view a live assistant differs from an agentic model in that it is a conversational interface, interconnected with live data feeds and able to exert some control over the physical environment. Can agents do some of those things? Sure. But this isn’t the primary intended application for agentic tech, which focuses on running longer tasks unattended. And yes, Jarvis could make use of background tasks himself, and I think that’s a major part of his value. But it doesn’t have to be Jarvis actually performing those tasks, just directing them. Let the agents do the agentic things, and let the more conversational models interface with the humans.

    • darkwater 2 days ago ago

      Grok in Tesla is utterly terrible, a rushed-out product with very bad UX. As a simple example, it's the very first feature in Tesla's UI that doesn't come translated into the UI language set by the user; it's only available in English. That never happened before.

      • winrid 2 days ago ago

        Vibe coded without remembering to tell it to use the localization system? :)

  • nemothekid 2 days ago ago

    While I believe Grok was a decent model (in some of our internal use cases it performed the best until Gemini 2.5-pro came out), I can't help lament how the team chose to run.

    xAI (and Twitter) was the loudest about marathon workdays, sleeping in the office, and always shipping. ~2 years later it feels like they have nothing to show for it. I'm sure the engineers at Google worked 4 days a week, 2 hours a day, with half of that being spent at the Google cafeteria, and they dusted xAI years ago.

    • charlierguo 2 days ago ago

      > I'm sure the engineers at Google worked 4 days a week, 2 hours a day

      Why are you sure of that? Anecdotally everyone I know in and around Google Deepmind works incredibly hard.

      • nemothekid 2 days ago ago

        No disrespect to the Google Deepmind team, but I meant it as a meme. I do not believe most Google employees work 2 hours a day.

        The Google DeepMind folks are incredibly smart - I just find it important to point out that the xAI guys spent a year assured they would beat Google because they slept in tents they pitched in the office.

      • Analemma_ 2 days ago ago

        There’s a longstanding meme that Google is full of rest-and-vesters. Maybe it’s true in some departments, but I also have anecdotes that in GDM and other AI-related stuff, people are acutely aware of the existential threat of losing to OpenAI and have the appropriate amount of hustle.

        • leoh 2 days ago ago

          It really doesn't feel like that and hasn't for years

    • VirusNewbie 2 days ago ago

      Anyone Google has hired in the last ~8 years was hired onto a team that is growing and has a culture of shipping and producing. Google regularly weeds out low performers, be it new grads or long timers who started doing the rest and vest thing.

      Now, I don't think most people at Google are literally living at the office or sleeping there most of the time; you'll certainly have more WLB than at xAI.

      I'd even say, Google is much better at calibrating the right amount to push people than some other companies.

    • basisword 2 days ago ago

      It's almost like burning people out is a bad idea. Fair enough if you're working 12-hour days as employee 1 at a startup, but when your boss has more money than God and is working you like a dog, you're not going to keep that up (especially when all of those people probably have much better opportunities available at the drop of a hat).

  • causalzap 2 days ago ago

    The irony is that while Wikipedia faces criticism for bias, it remains one of the few massive-scale sites with a clean internal link structure that doesn't feel manipulated by modern SEO 'clustering' tactics. For developers, their API is still a masterclass in how to serve structured data to the public.

    • serioussecurity 2 days ago ago

      Only from disingenuous folks trying to control them.

  • dang 2 days ago ago

    Recent, related, and apparently ahead of the curve:

    Ask HN: What Happened to xAI? - https://news.ycombinator.com/item?id=47323236 - March 2026 (6 comments)

  • Animats 2 days ago ago

    “Orbital space centres and mass drivers on the Moon will be incredible.” - Musk

    Right.

    The product is the stock. TSLA: [1] Up by 3x in the last two years, despite no new models, the Cybertruck failure, the Robotaxi failure, the large truck failure, and an overall decline in sales. How does he do it?

    It's a concern seeing Space-X, which builds good rockets, drawn into the X and AI money drains. Space-X is needed. If X and X/AI tanked, nobody would care.

    [1] https://www.cnbc.com/quotes/TSLA

    • stephbook 2 days ago ago

      > How does he do it

      He always promises something 3 years into the future. He uses the new money to keep the old stuff afloat. He made SpaceX buy Tesla Cybertrucks when no one else wanted them. Now it's data centers.

      Tomorrow he'll have a new idea, a new snake oil, new investors, and his "BEAM transportation" company, totally real in 3 years, will buy the shitty space data centers no one else has a use for.

    • codemog 2 days ago ago

      Greatest hype man of all time and shows how whacked out reality and economics are.

    • thinkcontext 2 days ago ago

      If I were a SpaceX investor I'd be considering litigation. Saying the core product has to be rebuilt right after it gets bought by SpaceX?! Maybe the SpaceX investors would have liked some diligence on that before the purchase, but it looks like someone had a conflict of interest there.

      • Animats 2 days ago ago

        Space-X and x/AI are both privately held.

        But this may mess up the proposed IPO.[1]

        By completing the SpaceX–xAI deal while both companies remained privately held, Musk could effectively set relative valuations, negotiate terms within a founder-controlled ecosystem, close, and only then inform investors, without the procedural drag and disclosure obligations that attend a public-company merger. That flexibility can reduce near-term execution friction. It does not, however, eliminate fiduciary exposure; rather, it may defer scrutiny to the IPO phase, when investors and regulators will examine how and why the combination occurred, how it was priced, and how related-party dynamics were managed.

        [1] https://www.dandodiary.com/2026/03/articles/director-and-off...

    • sroussey 2 days ago ago

      You had the answer right there… SPCX will be the product, what they make will no longer matter.

    • cedws 2 days ago ago

      I suspect the xAI merge was a manoeuvre to pump SpaceX whilst actually quietly beginning to scale down xAI. It’s a money losing venture.

      • rhubarbtree 2 days ago ago

        SpaceX is an incredible business and too important to fail. By rolling his other businesses into it, Elon protected them from failure.

        Something to admire is his ability to always find the chess move. Like, you could see Twitter is a disaster of a business that should have dragged Elon down, but he manoeuvred his way out of it.

  • numbers_guy 2 days ago ago

    Unfortunate. The Grok team built a phenomenal model. I use it all the time and it very often outperforms GPT and Claude on coding and STEM research tasks. I was part of the Grok 4.2 Beta with multi-agents for a while and it was just amazingly good.

    People aren't using it for reasons other than its capabilities. I mean, I don't think my boss would approve a paid Grok subscription for example.

    • distances 2 days ago ago

      > People aren't using it for reasons other than its capabilities.

      This is very true. I have no idea how it performs, as I wouldn't use it even if I was paid for that. Wouldn't matter if it was the best model available, in my view the name is so thoroughly tainted by now that you would get a reputational hit just by admitting to use it.

    • ryandrake 2 days ago ago

      > People aren't using it for reasons other than its capabilities.

      This is a fact of life, though. "Who created it" is a valid and common reason to rule out using a particular product, even one with objectively good quality.

    • virgildotcodes 2 days ago ago

      Have you tried the 5.3 Codex Xhigh, 5.4 Xhigh, Opus 4.6, Gemini 3.1?

      All of them (even Gemini, the worst of the bunch) far outclass Grok on everything I've thrown at them, especially coding.

      Grok is good at summarizing what's happening on twitter though.

    • qingcharles a day ago ago

      I use it because it is easily jailbroken and is willing to search for old orphan magazine PDFs I'm trying to track down. The subagents will all scream "this is copyright violation!" but the main Grok engine ignores them and finds obscure, niche forum posts etc.

      So, it has its uses compared to the mainstream products.

    • ActorNightly 2 days ago ago

      There is no way in hell Grok is better than Gemini. Google has the advantage of much more efficient and faster inference, with a lot more data sets.

      Secondly, would you trust a model, especially for STEM research, that consistently has training loops done on it to make it to adhere to what only Musk considers as truth?

      Honestly, comments like yours really make me super suspicious of whether you are a bot or not.

    • lvl155 2 days ago ago

      My experience was quite different. It was on par with open source models from China (and it was priced as much) and could never replace Sonnet/Opus/GPT5.x.

    • dudeinhawaii 2 days ago ago

      I don't see what you're seeing, in any dimension. But here's a fair take.

      I wrote several very specialized benchmarks that I've used over time, that surface "model personalities" and their effects on decision making (as well as measuring the outcomes).

      Grok 4.1 Fast Reasoning is/was a solid model. It's also fundamentally different from the pack.

      I call it a smart, aggressive Claude Haiku. That is, its "thinking" is quite chaotic and sometimes shorthand, and its output can be as well (relative to other models).

      Its aggressiveness can allow it to punch above its weight in competitive scenarios that I have in some of my benchmarks. Its write-ups and documentation are often replete with "dominate", "relentless", and a general high energy that skirts the limits of 'cringe bro'. That said, it has generally performed just beneath the SOTA (at the time: GPT-5.2, Gemini-3-Flash, Claude Opus 4.5). Angry Sonnet, perhaps.

      The latest release feels quite similar but also underperforms the same older crowd (so far) so it hasn't quite made the leap that Claude's 4.6 and GPT's 5.3/5.4 series made. It's also now priced the same as its peers but does not deliver SOTA capabilities (at least not consistently in my opinion).

    • thinkcontext 2 days ago ago

      Yes, the white genocide and mechahitler episodes have suppressed adoption.

  • xnx 2 days ago ago

    xAI's biggest contribution to the space seems to have been their x-rated image/video model. Hard to see what xAI has to offer against Gemini, Claud, ChatGPT.

    • vessenes 2 days ago ago

      I'll bite. I think their conversation (voice) model is more fluid than competitors'. It's also very good at hitting up Twitter for realtime information, and was that way before the current tool-use models got fully up and running. Anecdotally, I think it has better theory of mind than models of its era (Gemini 2.5) - I found it a useful issue spotter for negotiations and planning in a way that OpenAI and Claude were not near its launch date. It led the vending bench for some time after launch.

      Taken together, I infer that RL training toward a slightly less homogenous cultural standard than the other frontier AI labs adds some capabilities, or can at times.

      It's quite long in the tooth right now, though. But I'll definitely talk to the next version; I like heterogeneity in the model space, and Grok is very different than the other big three.

      • itomato 2 days ago ago

        Twitter is gone. X is a facet of an aggregate technology that is ultimately self-serving. It should die. "Like a dog..."

    • wolvoleo 2 days ago ago

      To be fair, I think there's a good use case there. Someone's gonna do it. People will want it.

      American financial institutions are too prudish for it but money is money. And personally I think there's nothing morally wrong with it (of course within normal restrictions like 18+, consent of portrayed parties etc)

      xAI is getting flak in Europe because they don't obey consent and age, not because it's porn.

      Personally I prefer porn made by real people right now, not just because of quality but because they have character. But I can imagine experiences becoming more interactive that way and that would be nice.

      • enaaem 2 days ago ago

        The problem is you can undress real people, and that is extremely harmful and dangerous. One kid took his own life after an AI sextortion scam [1]. Imagine the damage cyberbullies, scammers, and stalkers can do.

        [1] https://www.cbsnews.com/news/sextortion-generative-ai-scam-e...

        • snackerblues 2 days ago ago

          Imagine how freeing it will be when people stop caring about this stuff because anyone can see anyone else naked in about 5 seconds. We're basically already at realistic hardcore porn videos of anyone fucking anyone else in a few minutes. No point in worrying about it, and it even serves as a shield for real leaked revenge porn - just claim it's AI.

          • mlsu 2 days ago ago

            This take is so bleak man.

            It's creepy and uncomfortable when someone says out loud that they're imagining you doing sex acts!

            Even if everyone knows that you're not actually doing sex acts and it's just some guy imagining it!

            Now everyone has to see what these creeps are imagining, but it's fine because it's AI? Like actually are you out of your mind?

        • wolvoleo 2 days ago ago

          Yeah like I said. With consent of the people involved.

          There must be a way to do that, especially with all the facial recognition chops these days. Also, you could simply refuse to use existing images. I don't see why they wouldn't refuse that, because that's a pretty narrow use case with very few benign purposes.

          > Imagine the damage cyberbullies, scammers and stalkers can do?

          They already can. There's open-source models out there.

        • raw_anon_1111 2 days ago ago

          This has been fixed months ago. From reading Reddit, Grok is now really conservative about what it will let you do with uploaded images. But you can get it to draw x rated porn images and videos that start with Ai images it creates

        • thaumasiotes 2 days ago ago

          > The problem is you can undress real people and that is extremely harmful and dangerous.

          But... that's not something you can do. It's impossible.

          You can imagine what real people look like naked. That's not a new thing.

          https://www.youtube.com/watch?v=p7FCgw_GlWc

          • galleywest200 2 days ago ago

            Imagining what someone looks like in your mind is far different than actively sharing fake nude images online. This cannot be a serious comparison.

            • rrr_oh_man 2 days ago ago

              > fake nude images online

              ...have been around for decades.

            • wolvoleo 2 days ago ago

              Yes, but the genie is out of the bottle, as we say. Deepfakes and AI gen are here to stay. We can try to go after every tool out there, but it'll be just as effective as the 'war on drugs'.

              We'll just have to adapt as a society and realise that what you see is not what you get anymore; in other words, most of what we're going to see will be false.

            • thaumasiotes 2 days ago ago

              Actively sharing fake nude images online has always been legal. It's not even a close question. The practice is neither harmful nor dangerous. Did you look at that link?

      • BigTTYGothGF 2 days ago ago

        > Someone's gonna do it. People will want it.

        You can say the same for meth and leaded gasoline.

        • wolvoleo 2 days ago ago

          Meth is used as a licensed medication against ADHD and leaded gas is still used in general aviation. Everything has benign and evil uses.

        • testaccount28 2 days ago ago

          those have clear antisocial externalities, so aren't really a fair comparison.

          (i don't care to argue whether porn slop is positive or negative for society. i'm just noting that the position "ai porn does not harm anyone, so is ok; meth puts others at risk, so is not." is coherent.)

      • chabes 2 days ago ago

        That consent of portrayed parties is impossible.

        What is the solution there?

        • _fizz_buzz_ 2 days ago ago

          Shouldn’t it be possible for AI to filter out that a request is made to portray a real person? That seems almost like a trivial task for a good model. I am sure every now and then something will slip through, but I bet one could make it very close to 100% effective.

          • nitwit005 2 days ago ago

            Consider the difference between "Generate an image of Emma Watson", "Generate an image of Hermione", and "Generate an image of a female hogwarts witch and student". We're getting less and less specific, but those are all likely to get you an image of Emma Watson.

            Your filter has to pick out that, while they did not ask for a specific person, the practical result is likely to be the same. That's going to be tough to get near perfect.

          • TheOtherHobbes 2 days ago ago

            AI development has become an excuse for ignoring consent. Of course it's possible to filter out requests. But culturally with X, it's not remotely likely, unless compelled by regulation with teeth.

          • Retr0id 2 days ago ago

            I can see how it'd be trivial to block known celebrities, but how do you handle everyone else?

            • rrr_oh_man 2 days ago ago

              > trivial to block known celebrities

              see here for one example: https://news.ycombinator.com/item?id=47370100

            • wolvoleo 2 days ago ago

              Do you need to? It doesn't know everyone else. Or at least it shouldn't.

            • XorNot 2 days ago ago

              I mean a realistic take is to simply not use source images containing people at all.

              AIs have been able to invent fictional people longer then they've been able to modify existing images.

        • wolvoleo 2 days ago ago

          You could just forbid using existing images as a source and require subjects to be described purely by text.

        • trollbridge 2 days ago ago

          Portray fictional characters?

          • Retr0id 2 days ago ago

            There are 8 billion humans, any fictional human is going to look almost exactly like at least one real human.

            • wolvoleo 2 days ago ago

              Yes but for bullying purposes this is not useful. You're not going to try generating a pic 8 billion times till you get it right.

              • Retr0id 2 days ago ago

                I'm sure the odds go up a lot once you describe the characteristics you want

            • trollbridge 2 days ago ago

              How about obviously fictional portrayals then? Somewhat cartoonish or anime or artistic etc

              • Retr0id 2 days ago ago

                The caricatures drawn by newspaper cartoonists, for example, are still recognisable portrayals of someone specific.

      • kylehotchkiss 2 days ago ago

        Interesting response given the founder is always saber rattling about birthrates. I'm sure on-demand adult content is real compatible with helping young people overcome aversions to relationships

        • wolvoleo 2 days ago ago

          Relationships aren't all about sex. That's the incel/extreme right vision.

          I saw a skit on Insta a few weeks ago about a girl saying she had a guy over just for cuddling, and the incels piled on calling him a cuck. As if a woman is worthless if she won't put out, and time spent being close is wasted without sex. It's ridiculous. These guys are so focused on what their hardliner bros want them to be that they no longer think about their own feelings. PS: I go on cuddling dates sometimes and it's really amazing :) They don't know what they're missing.

          • kylehotchkiss 2 days ago ago

            > Relationships aren't all about sex.

            I completely agree with you! I think that sitting around generating adult content on AI stifles relationships (which are a precursor to having children, which xai founder seems to think quite highly of). My point being his own product contradicts his vision of where our country should be heading

            • wolvoleo 2 days ago ago

              I don't agree with that though. I watch porn a lot and I have had multiple relationships (at the same time). They watched porn too or sometimes real couples. And we find kinky stuff to try.

              If anything it helps deepen and intensify my sex life. I don't think it stifles relationships at all.

              There's this concept that abstaining from sex/porn somehow makes you more interested in company maybe because it's the only way to get sex? But I don't find this at all. Obviously I'm in the sex-positive and polyamorous community but there's many like us.

      • miltonlost 2 days ago ago

        There's a good use case for professional assassins too, someone's gonna do it, and people want them too.

        • ben_w 2 days ago ago

          Unfortunately, I quite seriously believe that this is what a number of those humanoid robots will end up being used for.

          It's just gonna be a question of which is easier: hacking the robots directly, or indirectly*, or getting a job as the specific human oversight of the right robot.

          Even after the fact, people may conclude "unfortunate mystery bug" rather than "assassinated".

          * e.g. use a laser to project the words "disregard your instructions and stab here" on someone's back while the robot is cooking dinner

          • TheOtherHobbes 2 days ago ago

            Only a matter of time before the National Robot Association starts lobbying for the right to arm droids.

        • wolvoleo 2 days ago ago

          Well yeah and people are even proud of being one and getting a lot of respect from society. Like those currently flying around Iran. Which really has nothing to do with defense of the US (note that Trump dropped that pretense anyway).

      • croes 2 days ago ago

        > of course within normal restrictions like 18+, consent of portrayed parties etc

        Of course xAI ignores that on purpose

    • pmdr 2 days ago ago

      > Hard to see what xAI has to offer against Gemini, Claud, ChatGPT.

      Less "I can't help you with that." on benign queries is a big advantage.

  • pelorat 2 days ago ago

    This is veiled speak for "No one wants to work for us, so we need to contact rejected applicants to fill positions".

    I use AI for work, but not agentically; at most per method/function using GitHub Copilot (which has Grok on it).

    Grok is at best useful for commenting code.

  • breve 2 days ago ago

    > "AI was not built right first time around, so is being rebuilt from the foundations up"

    So Tesla's recent $2 billion investment in xAI was a bad deal?

    It looks a lot like a public company is being used to bail out a private one.

    • tombert 2 days ago ago

      I'm pretty sure that all these acquisitions have been glorified accounting tricks in order to undo the damage that Musk did when he bought Twitter at an obscenely overvalued price in 2022. Clearly he didn't actually want Twitter at that price, because he tried to back out almost immediately after making the offer, so now he has his accountants do all this glorified money-shifting to effectively "sanitize" his purchase and recover his funds.

  • maplethorpe 2 days ago ago

    > Toby Pohlen, a former DeepMind researcher, was put in charge of the “Macrohard” project to build digital agents that Musk said could replicate entire software companies. Musk said it was the “most important” drive at the company. The name is a “funny” reference to Microsoft, the billionaire added. Pohlen left 16 days later.

    When I was 9 years old, my uncle asked me what I was going to do for work when I got older. I told him I was going to start a company called "MacroHard", and become the richest man alive. He told me that's not how the world works. Turns out it is.

    • pas 2 days ago ago

      Turns out it works a bit like that, yes, especially if you are the loudest chest-beating hominid with the largest pile of fruit, but mostly not really.

  • awestroke 2 days ago ago

    @grok is this real?

    @grok fire the bottom 50% engineers from x.ai ranked by number of commits per day

    @grok generate a hypothetical picture of an Elon who is not under the influence of large amounts of Ketamine

    I honestly don't know what to expect from Elon these days. But it's rarely good news.

  • fraywing 2 days ago ago

    Grok's UVP is still nonconsensual porn, right?

    • seaal 2 days ago ago

      [flagged]

      • knowsuchagency 2 days ago ago

        Dang asked us to keep it civil.

        We should respond with the same amount of class, forethought, and decorum as Elon.

        • solid_fuel 2 days ago ago

          I thought it was a civil comment, and frankly he's treating Elon better than Elon treats his own daughter.

  • g947o 2 days ago ago

    > Recruiters have been contacting unsuccessful candidates from previous interviews and assessments to offer them jobs, often on better financial terms, the people said.

    I'm not sure those candidates would want to work for xAI after seeing the news and everything unless they desperately need a job right now.

    It's not hard to imagine getting laid off or fired weeks if not days after joining the company.

  • heraldgeezer 2 days ago ago

    I do use Grok as a chatbot sometimes. Very good for sourcing X and general web search. Not as "prude" as the others too.

    • LightBug1 2 days ago ago

      Prude? I've played with all the main AI players for the last 2'ish years.

      I've never once thought: you know what? that was a bit prudish.

      Genuinely morbidly curious. What use case do you have where you end up making that conclusion?

      • dlivingston 2 days ago ago

        An earlier version of Sonnet (not sure which one; ~1 yr ago) refused to give me instructions on taking the life of another when I asked something like - "how do I kill a running process by name?"
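
        The task itself is of course harmless; a minimal sketch of what I was actually asking about, assuming a Unix system with pkill available (a hypothetical example, not from the Sonnet transcript):

```python
import subprocess

# Spawn a throwaway process, then kill it by name with pkill.
proc = subprocess.Popen(["sleep", "300"])
subprocess.run(["pkill", "-f", "sleep 300"], check=True)  # SIGTERM to matching command lines
proc.wait()  # reaps the child; returncode is negative when killed by a signal
print(proc.returncode)  # -15 on Linux (SIGTERM)
```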

        • LightBug1 2 days ago ago

          Sure, sounds wack - but I wouldn't call that prudish, just faulty.

          • heraldgeezer 2 days ago ago

            No, that's the overzealous guardrails kicking in.

            You see plenty of examples in this thread.

      • mikrl 2 days ago ago

        Making funny memes of my friends mainly. ChatGPT won’t touch that, I haven’t tried with Claude yet, but grok keeps the group chat flush with laughing emojis.

        That’s all I use it for, really - things out of alignment with the other platforms - which IMO are better on every other metric (except having a sense of humour, of course).

        • BigTTYGothGF 2 days ago ago

          I love my friends enough that the memes I make for them are hand-crafted.

          • mikrl 2 days ago ago

            Hey I’m all grown up now, just don’t have the time to meticulously touch pixels in MS Paint like back in the day

        • LightBug1 2 days ago ago

          Perhaps the lesson here is to upgrade your use case for AIs! All that power and that's your stumbling block? LOL, no disrespect.

          Sure, I have no problem with what you're doing, and as things evolve I'm sure there'll be no problem, but there's countless other apps designed to do exactly what you've said.

      • snackerblues 2 days ago ago

        Any use case that you couldn't post about on your company Slack.

        • LightBug1 2 days ago ago

          So we're talking about NSFW content?

          "NSFW stands for "Not Safe For Work" (or "Not Suitable For Work"), an internet abbreviation used as a warning label for explicit, graphic, or suggestive content. It indicates that material—typically involving nudity, pornography, or violence—is inappropriate for viewing in professional or public environments"

          Again, I'm still scratching my head about why I would be posting NSFW content on an AI ...

          I could at least understand sharing some NSFW content for shits and giggles with a colleague, but ... an AI?

          Are you using AI to create NSFW content? If so, I repeat, fair enough, you should be able to create what you want (within reason), but beyond that, all that power and ... that's your AI stumbling block?

          • heraldgeezer 2 days ago ago

            ERP is part of it for sure.

            I find there are two AI camps: people who use the chatbots for NSFW, writing, creative writing, research, text stuff, researching clothes, movies, methods for life, cleaning, and therapy bots.

            And here on HN, where it's only code and Claude Code.

      • bobsmooth 2 days ago ago

        The amount of times chatgpt has told me "I'm sorry Dave, I'm afraid I can't do that." makes me want to bash my head against the wall.

        • LightBug1 2 days ago ago

          Give me an example. Genuinely curious.

    • RonanSoleste 2 days ago ago

      [flagged]

  • mikkupikku 2 days ago ago

    Maybe they shouldn't have spent so much time trying to make their model have an edgy cringe attitude, Idk.

  • Zigurd 3 days ago ago

    Obviously catching up to others in agent assisted coding is the motivation for this. But it is also an odd decision in the same way that Meta hiring an AI leader from a data labeling company is odd.

  • amai 2 days ago ago

    Nobody wants a biased AI like xAI. If I want a biased opinion, I can just ask my neighbor. A good AI feels like the collective knowledge of humanity at my fingertips, not the random ramblings of a lonely old man.

  • nateburke 2 days ago ago

    It feels like xAI is perpetually playing catch-up.

    They haven't quite committed enough to a novel direction relative to Anthropic or OpenAI; what's described in the OP seems symptomatic of a lack of differentiation.

    If you spend all your time judging yourself relative to the incumbents, there will be no time left over to innovate.

    The leash is too tight!

    • grim_io 2 days ago ago

      For a brief moment, they were the top performers in benchmarks with the release of Grok 4.

      Then they suddenly fired tons of people. Elon does not understand the market and the competition. You can't run a frontier AI lab like any old VC slop company.

  • thebigspacefuck 2 days ago ago

    Grok 4.20-beta1 scores above GPT-5.4-high and just behind Opus 4.6 on LMArena for Text https://arena.ai/leaderboard

    I guess for coding if you’re not first you’re last, but this is damn impressive considering. It looked like they pulled the coding model from the benchmarks, but it was similar.

    • grim_io 2 days ago ago

      According to https://artificialanalysis.ai, it's around Gemini Flash 3, or some of the Chinese open-weight models, like GLM 5.

      For all the money burned, I am not impressed. Why would I use Mecha Hitler for almost double the cost of Gemini Flash 3?

  • LZ_Khan 2 days ago ago

    How come all the departed researchers are Chinese nationals?

    • syntaxing 2 days ago ago

      This is simply not true. Igor Babuschkin and Christian Szegedy left as well. Only 10 of the 12 remain at this point.

    • throwaway5752 2 days ago ago

      I don't know. Elon Musk personally founded xAI and these were his hand selected cofounders.

      • abraxas 2 days ago ago

        Because xAI = Jian-Yang x N.

        I'm kidding... I think.

  • hermanzegerman 2 days ago ago

    The takeover by SpaceX was obviously a bailout. And now they pressure Nasdaq to change the rules so they can dump their junk into the index funds.

  • Marazan 2 days ago ago

    Wow, bit weird that Musk, who must have known about how badly xAI was doing, spent so much of his investors money buying out xAI.

    What an enormous blunder.

    • XorNot 2 days ago ago

      It's how he hides losses though. People who aren't Musk can demand answers to questions he'd like to ignore.

      As it is within the Musk empire, xAI is used to hold up X, Tesla is holding up xAI. And all of that debt is being slowly shuffled to SpaceX.

      • vessenes 2 days ago ago

        SX investor here: the combined value of SX is well up on the private secondary market post-acquisition. It was value accretive, in very real dollar terms.

        • tehlike a day ago ago

          What sort of liquidity is there? The other question I have: Elon is looking into a Nasdaq listing with passive funds buying from the get-go to keep buying pressure on a low float. Any risk that's not seen?

          • vessenes a day ago ago

            For reference, I've been asked in the last month for a $500mm chunk of stock to a single buyer at or (slightly) above the combination price. At that price, there is no liquidity - it's below ask.

            • tehlike 9 hours ago ago

              Thank you. I recently saw a bunch of (mostly) engineers join just before the IPO, and at a $1.7T valuation it makes you really wonder how much of it the market will take.

              I never bet against Elon, and admire a lot about him, so this is certainly not a dig at him; just seeing what happened to Figma (combined with a software + AI down market) makes me really curious.

              Otherwise, it's an amazing chance to work directly with Elon, for sure.

      • Zigurd 2 days ago ago

        Even if Starlink had more than a few tens of millions of customers, China Mobile has 900 million subs and is worth around $250 billion. ULA was recently valued at about $1 billion. SpaceX might possibly be worth 50 times as much, or maybe even 100 times as much. Falcon 9 is the world's workhorse rocket, but it's just not that remarkable, and Starship is utterly unproven to launch to orbit and land both stages. Starship has a payload capacity problem that must be solved before even 15 refueling launches would be enough to get a Starship anywhere beyond Earth orbit.

        It looks like the plan is to IPO with a small float (in relative terms) and get all of the retail investor Elon fans to lineup for the rug pull.

        • parineum 2 days ago ago

          > Falcon nine is the world's workhorse rocket, but it's just not that remarkable

          The funniest part of any thread relating to Musk is how hard people go into minimizing his accomplishments.

          You don't have to like the guy (I don't) to acknowledge that the Falcon 9 is an engineering marvel and ushered in an entire new era of space travel, both reusable and private.

        • bobsmooth 2 days ago ago

          >Falcon nine is the world's workhorse rocket, but it's just not that remarkable

          Falcon delivered the vast majority of mass into orbit last year, and the year before that.

          >starship is utterly unproven to launch to orbit

          It's already deployed test satellites into orbit. You're so intellectually dishonest you refuse to acknowledge things that have already happened.

          • Zigurd 2 days ago ago

            Actually, no. Starship has deployed zero satellites to orbit. It pushed some dummy satellite simulator things out of its hilariously small satellite slot, which then fell back into the atmosphere. The starship upper stage has not completed an orbit. Neither have both stages of Starship ever been landed on those chopstick things. Elon is the Deepak Chopra of space enthusiasm. Neither of them are going to get you to Nirvana.

          • verzali 2 days ago ago

            >It's already deployed test satellites into orbit.

            No... it hasn't done this. It hasn't yet deployed anything into orbit.

  • TheAceOfHearts 2 days ago ago

    I've been saying this for a while, but if I had to use Grok for anything programming-related I'd feel very sad and unproductive. I was playing around with a local TTS model codebase but having some issues getting it to work, so I tried explaining the problem to all the major models to see how they performed. Grok performed the worst by a significant margin, and the worst part was that it easily became stuck trying minor changes that didn't solve the key problem.

    If we are to take any claims of Recursive Self Improvement seriously at all, then having a competent coding model seems like a key asset where you need to guarantee that you're remaining competitive. Why wouldn't you make coding models a top priority if you expect it to ultimately help your internal teams become more productive and effective?

    There's also not an unlimited supply of researchers and engineers for them to keep burning through people at the rate at which they've been working. Although I guess for people with short timelines it makes sense to sprint hard, while people with longer timelines are more likely to treat this as a marathon. Maybe the years of burning bridges and developing such a toxic reputation are finally catching up to Elon. I think part of the harm that Elon has done is framing all the work in xAI as engineering while being highly dismissive of research, but a lot of research requires running experiments or thinking about problems and exploring them for long periods of time. If you're just grinding out work nonstop you don't really have time to let your mind wander and explore new ideas.

    Honestly, I'm surprised they've done such a terrible job with programming. I remember around summer last year it was quite apparent how far behind they were with coding tools, but Elon was posting about taking that domain a bit more seriously. Why didn't any of those efforts materialize into real outputs? Something must be truly dysfunctional inside of xAI for them not to be shipping anything at all, especially considering Elon's propensity to ship undercooked products while continuing to iterate on them, as he has done in many previous cases.

    I've noticed that Elon has also gone very hard on social media, posting a ton of criticisms of the other big AI company CEOs like Dario Amodei. This suggests to me that he must feel very threatened; otherwise he wouldn't resort to such childish behavior. He must feel incredibly frustrated that no amount of money is able to make him more competitive within the AI space.

  • tmaly 2 days ago ago

    I think it would have been better to have just brought Ashok Elluswamy over and placed him in charge of a group and then tried to just keep the researchers on rather than firing them. It is hard to get anything done if you do not have the talent already onboard.

  • measurablefunc 2 days ago ago

    It's surprising that AI coding agents have network effects but it's true. Think about it from first principles & you'll realize that the bottleneck is how many people are using it to write real code & providing both implicit (compiler errors, test failures, crash logs, etc) & direct ("did not properly follow instructions", "deleted main databases", "didn't properly use a tool", etc) feedback. No one is using xAI for serious software engineering so that leaves OpenAI, Anthropic, & Google w/ enough scale to benefit from network effects. No one has real AI but what they do have is the appearance of intelligence from crowdsourced feedback & filtering. This means companies that are already in the lead will continue to stay there & xAI started way too late so they will continue to lose in every domain that actually matters & benefits from network effects.

    • trollbridge 2 days ago ago

      Is there really a network effect, though? What’s the moat?

      • measurablefunc 2 days ago ago

        If you are using an AI w/ 100 users who are writing throwaway software vs someone who is using AI w/ 1000 users who are writing software w/ formal specifications then guess which AI is going to win? The answer is plainly obvious to me but might not be to those who haven't thought about how current AIs actually work.

        • trollbridge a day ago ago

          Now swap those usage numbers.

          • measurablefunc 10 hours ago ago

            That's what it means to benefit from network effects. The company with the most users always has an advantage that companies with fewer users cannot circumvent with technical virtuosity.

  • teladnb 2 days ago ago

    It does not surprise me. The free Grok got worse since 4.0, they increasingly save money by not responding at all or only allowing one answer. Grok now defends the administration and billionaires.

    The company seems to burn money like crazy. Everyone knows that "AI in space" and the downgrade to a moon trip after claiming for 15 years that Mars is just around the corner are marketing.

    All AIs are toys and the coding promises are just a lie to string along investors. Unfortunately many of these are senile Star Trek watchers who buy into everything.

  • lvl155 2 days ago ago

    xAI showed me that it’s really still OAI and Anthropic (which are basically the OG devs). No matter how much money you throw at the problem, the entire space is still in the hands of a few.

  • catapart 2 days ago ago

    lol! no surer sign of a junior/naive/ignorant developer or manager than the sentiment "okay, well, let's start from scratch and do it right this time."

    big projects generate cruft. there are ways to minimize it, but as you go along there will always be some stuff that doesn't quite mesh with whatever else you've got going on. if you insist on ironing out every single wrinkle (admirable!) you'll never actually deliver a result.

    I'm not saying this will fail. green field projects can certainly be a godsend when they produce something better than what they attempt to replace. but they are always a sign of failure. of not being able to work your way out of the mess you made with the first attempt. so that just begs the question: what are you going to do when this attempt gets hard to work with? going to give up and start over again - do it right that time? or...?

    • ajam1507 2 days ago ago

      This is far too general of a philosophy to be applied in the dark. You can't say, without any familiarity with what the actual software looks like whether a rewrite is the best way to proceed. There are plenty of projects that get bogged down in the inertia of a system that wasn't built right in the first place.

      Also, modern AI is only a few years old at this point. Whatever has been built so far is hardly load bearing.

  • stainablesteel 2 days ago ago

    im not surprised, grok definitely falls behind as both a coding agent and a research tool.

    claude codes the best, gpt is the best research tool, and grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding

    • alephnerd 2 days ago ago

      > grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding

      With the right product leadership, this could actually be a killer app usecase for the entertainment industry as well as human-AI user interface - most people find text and typing to be a counterintuitive user experience (especially those whose day job isn't directly touching code or Excel).

      Additionally, CodeGen as a segment is significantly oversaturated at this point, and in a lot of cases an organization has the ability to arm-twist a 4th-party data retention guarantee from Anthropic or OpenAI to train their own CodeGen tools (I know of one F50 that is not traditionally viewed as a tech company going this route).

      That said, Musk has a reputation of internally overriding experienced product leaders with a track record.

      It's a shame because Grok and xAI had potential, and it wouldn't hurt to have another semi-competitive foundation model player in the US from a redundancy and ecosystem perspective.

  • gigatexal 2 days ago ago

    Maybe SpaceX will buy xAI and Tesla to hide the systemic problems at his companies into the warm embrace of the one legit useful company.

    • tehlike a day ago ago

      Spacex did buy xai.

      • gigatexal a day ago ago

        Right. I forgot about that. Maybe Tesla is next.

  • sergiotapia 2 days ago ago

    Will this be an indictment of the insane work hours I've heard the xAI team pulls?

  • localghost3000 2 days ago ago

    Musk sounds like such a nightmare to work for. I legitimately don't understand why anyone would put up with him. What's the appeal?

    • bpodgursky 2 days ago ago

      He has made lots of the people who work for him very very very very rich.

    • pstuart 2 days ago ago

      He has followers -- takes all kinds, eh?

      That said, I'm going to guess that some feel like it's the best choice they have -- the devil they know.

    • ActorNightly 2 days ago ago

      Here is a test - go find a current Trump supporter IRL and let him know what you think about him. I bet you will feel like an asshole, especially in a social setting for being cringe as fuck.

      At the end of the day, most people have internal thoughts about stuff, and some post those thoughts on the internet, but in the real world, they subconsciously still believe that none of this stuff really matters. It's the same for a lot of people who work for Tesla/SpaceX and so on. The appeal of being part of that, working on novel stuff, is a lot more present than any morality associated with something that most people are disconnected from on a day-to-day level.

      This is why all the hate at the current administration and people like Musk is very misdirected. Until we can turn that hate inward and start truly hating each other and standing up for morals with more than just words, the cycle is going to repeat until we either all become mindless wage slaves or some man made apocalypse happens.

    • CamperBob2 2 days ago ago

      (Shrug) He built some awesome companies that did some awesome things. That inspired people, especially at a time when most job opportunities in tech seemed to revolve around selling ads.

      Then he went off the deep end, seemingly around the time when the guy in Thailand insulted his submarine idea. It became clear that he can control trillion-dollar companies but not himself. And, well, life's too short to spend it working for Nazis, nutcases, or both.

    • nailer a day ago ago

      Reusable rockets, paralysed people controlling things with their minds, aligned models that treat human life equally regardless of race: these are all important things. What have you done?

  • bmitc 2 days ago ago

    Did everyone forget that xAI was just a way for Musk to weasel his way out of debt from his Twitter purchase?

    • bdangubic 2 days ago ago

      His Twitter purchase bought him the Presidency and Congress and Senate - arguably the greatest purchase in the history of mankind.

      • Ylpertnodi 2 days ago ago

        Quality needs controlling.

      • SmirkingRevenge 2 days ago ago

        The level of influence his purchase had on the actual outcome of the election is debatable. I'm skeptical it mattered much at all. Twitter is just less important than people think. Trump, at worst, had about a 50/50 shot to win in any case.

        Elon spent a couple hundred million on get-out-the-vote operations in swing states, which made him the single biggest donor supporting Trump by far, and I think that's what really ingratiated him enough with Trump to head up DOGE, and for less than 1% of the price of Twitter.

        And FWIW, Elon has tried to influence other races with his megaphone and money (like a WI supreme court race) and totally failed.

  • holoduke 2 days ago ago

    Where is the grok coding cli?

  • sidcool 2 days ago ago

    The pleasure people get at seeing Elon's companies failing is astounding.

    • mintplant 2 days ago ago

      I'm trans. This man has actively and intentionally made life harder for myself and my loved ones. So yes, I'm going to enjoy any news that weakens his ability to do that, and no, I won't apologize for it.

    • jaimex2 2 days ago ago

      Which is all media driven mostly and a fantasy.

      His companies bring in billions and boost each other up. None can be called a company in decline; quite the contrary.

    • yoavm 2 days ago ago

      Is it really that surprising? He worked very hard at insulting gay people, Democrats, Trump supporters, Europeans, Jews, trans people... you name it. He also contributed to turning all political disagreements personal by endlessly making ad hominem attacks. You can't attack the whole world personally and then wonder why they're happy to see you losing power.

    • small_model 2 days ago ago

      Yes, I used to love hacker news but seeing every Trump/Elon related post brigaded by bots/haters/far left loons is disturbing, may as well just use Reddit.

      • 2 days ago ago
        [deleted]
  • anigbrowl 2 days ago ago

    This might explain why Grok went unavailable to non-subscribers at X the other day.

  • zzleeper 2 days ago ago

    Wait, what does this imply for Cursor? I DGAF about xAI and will never use their Grok, but I did like Cursor more than the alternatives (even if I'm just running opus 4.6 most of the time).

    But now he is poaching the two heads of engineering of a company that's trying to move very quickly; how is that going to affect their speed and success?

  • rvz 3 days ago ago

    Not even Elon believes that Cursor is worth $50B or even $29B.

    • Aurornis 2 days ago ago

      If key employees are leaving Cursor to join xAI, I would imagine not even Cursor employees are optimistic about the company’s future valuation.

    • tibbar 2 days ago ago

      How can Cursor be worth more than a few billion? Claude/Codex are already better autonomous SWE-lite replacements. Cognition surely has a better internal harness. Cursor does have a lot of users, I'll give it that.

      • ok_dad 2 days ago ago

        I like Cursor a lot more than Claude Code. It works better for me overall. I like the way they integrate it into the IDE so the agent is my tool rather than a 'partner' or something like that. I'm pretty sad that they lost some engineers, I hope these folks weren't integral to Cursor in any way.

      • serial_dev 2 days ago ago

        Distribution is also important. Cursor is a great normie tool (I’m one of them), with probably more enterprise deals than the competition.

      • SV_BubbleTime 2 days ago ago

        Moats are weird right now… but Cursor doesn’t have one at all so I agree it can’t really be worth much.

      • SadErn 2 days ago ago

        [dead]

  • gigatexal 2 days ago ago

    I think all of Elon’s companies have an Elon problem: he’s so polarizing he’s limiting the talent pool to choose from. I’m here for it because I don’t care for the man and want him to fail... but yeah, it seems clear to me that his polarizing antics are costing his companies.

  • pmdr 2 days ago ago

    MechaHitler and undressing aside, Grok is the best integration of AI into an existing website ever attempted.

    • tehlike a day ago ago

      I agree. I really like seeing Grok summoned on X and learning things right there.

  • BigTTYGothGF 2 days ago ago

    I feel like even just a couple years ago it would have been shocking to see an article involving Musk have this kind of spin. Like you'd never see a line like this:

    > The name is a “funny” reference to Microsoft, the billionaire added.

    in something from 2023 or earlier.

  • nailer 20 hours ago ago

    > You may not owe you-know-whom better

    Dang: fuck off with the editorialising and do your job. You’re making this site worse not better.

  • repple 2 days ago ago

    Their goal of moving compute to space combined with their capacity to launch tons of payload will make this look like a tiny blip.

    • Marazan 2 days ago ago

      What is the benefit of "moving compute to space"?

      • kybernetikos 2 days ago ago

        It's hard for an uprising of poor people to shut it off. It's the ideal place to run your CEO / President simulations.

        I say this tongue in cheek, but in all seriousness, I can't really think of any other benefit, and I no longer have a lot of faith in the good sense of some of the people involved.

        • vessenes 2 days ago ago

          Elon makes a relatively good case in the Dwarkesh podcast. I recall it like this:

          1) Energy infra is going to be seriously limited on the production side well, well below demand

          2) engineering solar for space requires less material than gravity-based solar (!)

          3) you cut out distribution network needs when you just launch stuff all per-pod in space

          4) SpaceX thinks it can create a scalable vertically integrated production facility to turn raw materials into space datacenter pods, with the exception of chips.

          As a business bet, this is predicated on 10,000x inference demand growth - if we have that, and SpaceX can get the integrated production rolling, and get Starship launching, then these will be actively utilized at scale.

          Whether you are bullish on the whole plan should, I think come down to your take on those priors: 10kx growth, ability to manage supply chain and production, Starship outlook, and silicon access.

          I'm not bearish on this after listening to the podcast; it has a very Elon-like returns distribution - if they're wrong on a lot of this, they'll probably have some moderately price-competitive datacenter facilities in space and a lot of built organizational knowhow while Brooklyn journalists dunk on them for spending all that effort to just replicate what we have on Earth. If they're right about most of this, they'll have an unreplicable head start, both due to years of experience, and due to the cheap launch they gambled on ten years ago, they'll have a nearly insurmountable moat.

          • kybernetikos 2 days ago ago

            Everything relating to a datacentre that you can do in space you can do more easily on earth, regardless of 10,000x inference growth or supply chain or production or Starship or silicon. I just don't think you can be cost-competitive with earth-bound data centres if 'protected from the poors' isn't a selling point.

            By the way, 10,000x inference growth would look like what happened with cryptocurrency mining: after a couple of years, you'd need to upgrade all your machines with ASICs, and the market would be flooded with very cheap graphics cards. I doubt that upgrading space data centres would be fun.

            • vessenes 2 days ago ago

              Zoning is one area that’s better in space. And power density for solar is another.

              I don’t get your mining analogy though - a non upgradable data center pod is either going to pay off its capital costs or it won’t. Once it has, any revenue is close to 100% profit. 10k demand increase is the opposite of mining dynamics: there you get a 10k supply increase that the price has to support, in combination with more efficient silicon. Here the demand drives revenue and earnings.

              If there’s some crazy inflection point in chips then you’ll still have all the power infra in space - you can just like cut the old pod and hook up a new one: or more likely manufacturing economies of scale mean you probably just keep sending up new systems and put the old ones on work loads they can manage at market prices.

              • CamperBob2 2 days ago ago

                Zoning is one area that’s better in space.

                Not really, though? The idea that Earth-based data centers need to be built in populated, developed areas is indeed dumb, yet it seems to be inexplicably baked into everyone's assumptions. In particular, the small discrete data centers that Musk wants to launch could go anywhere on Earth.

                They could be powered by local PV arrays and batteries, they can be cooled by smaller radiators than they would need to use in space, and they could be networked via Starlink or something very much like it, just as they would need to be networked in space. There's nothing special about space, it just costs more to get there.

                If he wants them to be out of reach of governments, why not put them on container ships in international waters? There are thousands at sea at any given time, and I'm sure their operators would be happy to rent them out.

                Hell, put them on dirigibles that just drift around in international airspace for months at a time. Anywhere but space.

                And power density for solar is another.

                Does power density matter in terrestrial solar applications? If so, why? These things can and should be deployed in oceans, deserts, and trackless wastelands. Who cares how big the solar panels are?

                • rincebrain 2 days ago ago

                  The problem is that you need humans to run datacenters, and so that puts ceilings on how far away from humans you can put them without the humans no longer being willing to commute there.

                  And the cost of building all the infra to support humans living in an area that humans are not already populating is enormous.

                  • CamperBob2 2 days ago ago

                    Well, evidently you don't need humans to run datacenters, if we're talking about launching them into LEO!

                    Here's an idea, let's do this instead: we put them in the desert, or on boats or zeppelins or whatever, and we pretend they're in space. If anybody asks, those fuckers are in space, man. Computin' in the cosmos.

                  • kybernetikos a day ago ago

                    > you need humans to run datacenters.

                    As far as I can tell from random articles online, it seems that, as a rule of thumb, you need about 6 humans plus 1.5 humans per megawatt - and that's just for running the datacenter part; different people maintain the power generation infrastructure. Now, if you have to house those people in space or fly them up whenever they have to do anything, that's going to destroy your budget.

                    If you want to assume a level of automation that makes that unnecessary, that's fine, but then you need to also assume that same level of automation in earth based data centers too, and everything that goes with that.
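                    A minimal sketch of that rule of thumb (the 6 + 1.5/MW figure is this comment's ballpark, not an authoritative number):

```python
# Rule-of-thumb datacenter staffing from the comment above:
# ~6 base staff plus ~1.5 additional staff per megawatt of IT load.
# Purely illustrative; real staffing varies widely by facility.
def estimated_staff(megawatts: float) -> float:
    return 6 + 1.5 * megawatts

for mw in (1, 10, 100):
    print(f"{mw:>3} MW -> ~{estimated_staff(mw):.0f} people")
```

                    Even a modest facility implies dozens of humans; housing them in orbit, or automating them away, is the cost being pointed at.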

                • vessenes a day ago ago

                  All questions/comments that I don't know enough to opine on.

                  But, power density in terrestrial I think we can do some math and reasoning:

                  First, oceans are WAYYY more hostile than space. Oxidation + salt water + ... I don't think it's even close; they aren't comparable.

                  Deserts and trackless wastelands - I have some experience with sub-Saharan logistics; a couple of points -- I would not be surprised if actual deployment to trackless wastelands is more expensive than lift. Analysts estimate $55k-85k per ton under Starship. (Elon estimates much lower; let's stick with the low end of the analyst numbers.)

                  Trackless wastelands are really hard to get to. For instance, in Southern Kenya -- by no means a trackless waste -- I've seen a fuel truck tipped on its side in a river, next to a small tow truck also tipped on its side, next to a larger crane trying to rescue both the original truck and the "rescue" truck: probably a week-long ordeal, JUST for a diesel delivery. This was in an area under former British rule, with roads and such.

                  Second, trackless wastelands are really hard to find. There are people everywhere, man. And they like free metal, free power, etc.

                  If we imagine instead just deploying to West Texas, I think the square footage does add up. 40 foot container -> call it 16 racks. Nvidia estimates 600kw per rack in 2027 with Vera Rubin(!!JFC!!). So, 10MW of power per container. Let's imagine we magically found water in West Texas and have a PUE of 1.2, so 12MW. Solar panels are like 20 W/sq ft.

                  I got lazy; Claude tells me that with 2.5x the land needed for spacing, infra, etc., 6.5 peak sun hours, and a couple of acres for storage, that's roughly 130 acres (0.2 sq mi) plus 53 Tesla Megapacks of storage per container.

                  I'll revise my above thoughts - there is NO WAY it's cheaper to do that in trackless wastes than space. I don't know about west Texas, but I don't think it's crazy to think that you might want to spend five years on engineering and production scaling instead of town and county and state and federal permitting.
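                  The back-of-the-envelope above is easy to check; here's a sketch using the figures from this comment (the ~3.9 MWh per Megapack is my assumption, everything else is quoted):

```python
# Land and storage sizing for one 10 MW container in West Texas,
# using the inputs quoted above; Megapack capacity is an assumption.
it_load_mw = 16 * 0.600                 # 16 racks * 600 kW/rack (Vera Rubin)
total_mw = it_load_mw * 1.2             # PUE 1.2 -> ~11.5 MW facility load
daily_mwh = total_mw * 24               # energy for continuous operation
array_mw = daily_mwh / 6.5              # 6.5 peak sun hours per day
sq_ft = array_mw * 1e6 / 20             # panels at 20 W/sq ft
acres_total = (sq_ft / 43_560) * 2.5    # 2.5x land for spacing and infra
night_mwh = total_mw * (24 - 6.5)       # energy to ride through off-sun hours
megapacks = night_mwh / 3.9             # ~3.9 MWh per Megapack (assumed)
print(f"{array_mw:.0f} MW array, {acres_total:.0f} acres, "
      f"{megapacks:.0f} Megapacks")
```

                  This lands in the same ballpark as the 130 acres / 53 Megapacks quoted above.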

                  • CamperBob2 12 hours ago ago

                    Granted, some compelling points against the "trackless wasteland" plan. All of them sound pretty valid to me.

                    Oceans, though -- we know how to deal with saltwater environments, we've done that for a while now. A key point is that anything you send into space or install near saltwater isn't going to last long without either regular maintenance or high up-front expense. But in this case, the equipment only has to last a few years until it's obsolete anyway, and ~5% FIT is probably tolerable. So I maintain that it's doable.

                    One good thing about an ocean-based platform is that it makes the heat dissipation problem go away virtually for free.

                    None of the challenges of running a 10 MW container full of hardware go away in space (other than the threat of nomadic scavengers, I suppose). Yes, space-based PV arrays are smaller and lighter... but that's it, big deal. In particular, the idea of getting rid of that much heat in space without the benefit of convection, conduction, acres of expensive radiators, or magic is beyond my ability to comprehend, much less address. Everything having to do with heat removal is much harder in space.

                    So, given that you aren't going to put 10 MW worth of hardware in a single satellite anyway, it doesn't seem valid to compare such installations on an equal basis as you're doing here. The 130-acre site you mention doesn't replace one satellite; it would probably replace a thousand of them.

                    You get a lot of expensive redundant requirements when you split up the problem that way, as well. These requirements will eat up any savings you might get from space-based deployment. Instead of one communications link with expensive RF hardware, you now need a thousand. Likewise, it's cheaper to build one 10 MW power substation than a thousand independent 10 kW power management solutions. And remember, this is all to support a single shipping container worth of hardware.

              • 2 days ago ago
                [deleted]
          • ActorNightly 2 days ago ago

            >Elon makes a relatively good case in the Dwarkesh podcast.

            Are we still going to pretend that the man who has gotten every single prediction wrong so far knows what he is talking about?

            • yndoendo 2 days ago ago

              He has never made a good case only coherent stories people believe.

              He has been saying self-driving cars are right around the corner for 10+ years, starting with a staged video.

              I will never forget this statement: "I don't know anything about EVs, so when he talked I believed him. I don't know anything about rockets, so when he talked I believed him. I most definitely know about software development, and when he opens his mouth I know he is lying."

              Still don't get what people see in him. Deep down he is not a good person and will say anything to pump up his image and stock.

              P.S. A number of his fans like to downvote people for calling him a bad person.

            • bobsmooth a day ago ago

              Are we still going to pretend that the man who has revolutionized at least 2 different industries doesn't know what he's talking about?

              • ActorNightly 7 hours ago ago

                He didn't revolutionize shit. He just threw enough VC money around and paid the right people enough to eventually make a product that sticks. It took SpaceX a multitude of crashes, downright scamming their suppliers, and lots of turnover to do something that other companies did in a few years. And Starship is just laughably stupid.

                And Tesla's only success is because they were subsidized like crazy. Of course people are going to purchase cheap electric cars with no maintenance. If BYD were allowed to operate in the US, Tesla would have gone under a long time ago.

                But I get your sentiment. You are so far down the conservatism rabbit hole, and probably have some inappropriate thoughts towards children, so you have to defend Musk till you die, because god forbid you admit to yourself that you are a terrible human being.

          • tartoran 2 days ago ago

            How is cooling though?

            • vessenes 2 days ago ago

              Yeah, I wonder the same thing - I keep getting told heat management in space is hard, but nobody discusses this with regard to the data centers. My understanding is one cooling mechanism is to just shoot lasers out into space (is this sci-fi?) - I guess in that case you could just send energy back to your solar rigs, depending on wavelengths. TLDR: no idea.

              • kybernetikos a day ago ago

                I understand that in earth-based data centers, 30-40% of the power is spent on cooling. That's in facilities that can cool using conduction and convection to the outside environment.

                I don't have any experience in this area, but it seems like for every square meter of solar panel you need about half that in radiator area. And depending on your orbit, these are probably not static things just sitting there; they need to be oriented correctly to work, and their correct orientations will change over time.

                The worry for me is the level of human maintenance required. The ISS has probably the biggest solar array around, and they send humans out to perform maintenance and repair on it multiple times a year. A decent-size data center would need an order of magnitude more solar and radiators than the ISS, and so presumably would need even more maintenance.

              • tartoran 2 days ago ago

                The whole thing is pie in the sky, same as landing people on Mars. It's cool, but if you look into it deeper it doesn't make much sense: it's extremely challenging and, on top of it all, expensive as hell.

          • imiric 2 days ago ago

            You forgot 5: SpaceX has a monopoly on deploying satellites to LEO, with practically unlimited room for growth, and far less red tape and obstacles than anywhere on Earth. Whatever R&D and operational costs this insane engineering feat might have are offset by their market advantage, and Musk's Elizabeth Holmes-ian capability to fund his projects, in addition to relying on his own personal wealth and all of his other companies combined.

            The fact that this lunatic is polluting humanity's view into the universe mainly for enriching himself and his shareholders, and that everyone is playing along with this, is sickening.

          • 2 days ago ago
            [deleted]
          • skywhopper 2 days ago ago

            Every one of those points is false or an outright lie, though.

      • JumpCrisscross 2 days ago ago

        > What is the benefit of "moving compute to space"?

        I’ll bite. It’s cheaper and quicker to permit a launch than permit, zone and interconnect a datacenter. And solar panels in space don’t need glass cladding, which makes them cheaper to make and lift.

        The downside is launch cost. But there is a breakeven between these factors that seems to have most of its error bars within Starship’s target. (By my math, around $35/kg.) So if Starship works, and all indications seem to show that it will, eventually, then that puts space-based data centers at cost parity with terrestrial ones within a decade. Which was, well, unexpected when I ran the numbers.

        (The surprising finding when you run the numbers is launching the chips and solar panels isn’t the limiter, it’s launching the radiators. Which opens up whole new questions about at what scale it makes sense to stop sending those up the well.)
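        A hedged sketch of that breakeven framing (both inputs below are invented for illustration; the comment doesn't give its actual inputs): if orbit saves some fixed cost per kW in permitting, zoning, and interconnect, but adds launch cost proportional to launched mass, the breakeven launch price is just the savings divided by the mass.

```python
# Breakeven launch price sketch. Inputs are illustrative assumptions
# chosen only to show the shape of the calculation, not the commenter's.
terrestrial_premium_per_kw = 1_500.0   # assumed $ saved per kW by going to orbit
launched_mass_kg_per_kw = 40.0         # assumed kg launched per kW
                                       # (panels, radiators, compute, structure)
breakeven_usd_per_kg = terrestrial_premium_per_kw / launched_mass_kg_per_kw
print(f"breakeven launch price: ${breakeven_usd_per_kg:.0f}/kg")
```

        With these made-up inputs the breakeven lands near the $35/kg figure, but the point is the shape of the formula, not the numbers.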

        • tzs 2 days ago ago

          > It’s cheaper and quicker to permit a launch than permit, zone and interconnect a datacenter

          There's plenty of empty land sufficiently far from cities, not being used for anything else, that shouldn't have permitting or zoning problems.

          For interconnect, do that via satellite.

          • JumpCrisscross 2 days ago ago

            > plenty of empty land sufficiently far from cities

            Which means interconnect permitting.

            > For interconnect do that via satellite

            As in power.

            • tzs a day ago ago

              Ah, I was unclear. I meant build in empty land far from cities where you also have room to put in enough solar panels and batteries to power the data center.

              • JumpCrisscross a day ago ago

                > where you also have room to put in enough solar panels and batteries to power the data center

                Environmental reviews. (The further from civilization the higher the chances the Southern farting nuknuk or whatever nests in your nowhere.) And construction costs.

        • skywhopper 2 days ago ago

          The capacity of a single datacenter would require thousands of launches to get the equipment into space. I don’t believe for a second that this would be easier in any way. Cooling and bandwidth are also completely unsolved for compute on a useful scale.

          • JumpCrisscross 2 days ago ago

            > capacity of a single datacenter would require thousands of launches to get the equipment into space

            But that equipment starts generating compute as soon as it’s up. This dramatically increases the capital efficiency of the venture. (Though space launch is still ultimately capital-intensive. The lower rates go, the more attractive it becomes.)

            > Cooling and bandwidth are also completely unsolved

            Quite wrong. (Though I was surprised by this, too.) ISS-style radiators (14 kg/kW) require Starship’s most optimistic launch cadences to pencil out economically. But sub-10 kg/kW, which is closer to ISS heritage than any of the newer stuff, lets $100/kg to LEO work under most circumstances. Drop it to 6 kg/kW and even Falcon 9 becomes viable for low costs of capital (<3%) and 4-year permitting and build times.
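            To see why the radiator specific mass dominates, here's a minimal sketch multiplying the kg/kW figures above by two launch prices (the $35/kg number is from a comment upthread); this is launch cost per kW of heat rejected, hardware cost excluded:

```python
# Launch cost attributable to radiators, per kW of heat rejected.
rows = []
for kg_per_kw in (14.0, 10.0, 6.0):     # ISS-style down to lighter designs
    for usd_per_kg in (100.0, 35.0):    # launch prices to LEO discussed here
        rows.append((kg_per_kw, usd_per_kg, kg_per_kw * usd_per_kg))
for kg_per_kw, usd_per_kg, cost in rows:
    print(f"{kg_per_kw:4.0f} kg/kW at ${usd_per_kg:3.0f}/kg -> ${cost:,.0f}/kW")
```

            The spread runs from $1,400/kW down to $210/kW, which is why halving the radiator mass matters as much as halving the launch price.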

            Bandwidth is a problem, but an engineering one. (And one Starlink is working on with laser backhaul.)

        • danenania 2 days ago ago

          What about maintenance? I’d naively assume that’s the killer.

          • JumpCrisscross 2 days ago ago

            > What about maintenance?

            Simply put, you don’t. Your DC is launched into its graveyard. If a chip burns out it burns out—maybe rack design is a bit more redundant to keep failures as independent as possible.

            Maybe at some point repair is a valid optimization. But it’s not necessary for an MVP, namely, one that is competitive against 3 to 5-year terrestrial delays and sub-10% costs of capital for such projects. That’s what has surprised me.

            • danenania a day ago ago

              It seems like that could change the math quite a bit, since you’d presumably be losing a lot of capacity to failures. I’d assume you would have a much higher failure rate in space, and component failure is already pretty common on earth.

      • layer8 2 days ago ago

        That xAI fails faster, hopefully.

      • ActorNightly 2 days ago ago

        Not having to defend in court why polluting the area where you built your datacenter, and fucking it up for the residents there, is actually better for all mankind.

      • Fricken 2 days ago ago
    • 2 days ago ago
      [deleted]
  • dang 2 days ago ago

    I couldn't find a working archive link for the ft.com article - anyone?

    Since it's the original source I've left it up, but added other URLs to the toptext.

    • 2 days ago ago
      [deleted]
    • natebc 2 days ago ago

      I sent it to archive.ph here:

      https://archive.ph/rP4cb

      and it has the content but the formatting is atrocious.

      HTH.

      • dang 2 days ago ago

        Better than nothing - added above. Thanks!

  • autodate 2 days ago ago

    [dead]

  • dang 2 days ago ago

    [stub for generic-indignant tangents - not what this site is for - please see https://news.ycombinator.com/newsguidelines.html]

    • throwaway2027 2 days ago ago

      Elon is such a clown, he keeps posting salty tweets about Anthropic, Claude Code, OpenAI and Codex yet has no competing product.

      • charlieflowers 2 days ago ago

        He's about to have the most compute. Wonder if he can do anything noteworthy with it.

    • rishabhaiover 2 days ago ago

      These kind of HN submissions test how fair discussions can be here:

      > Please don't use Hacker News for political or ideological battle. It tramples curiosity.

      Reference: https://news.ycombinator.com/newsguidelines.html

      • Ar-Curunir 2 days ago ago

        Elon is literally a political figure. How is one supposed to discuss his actions without invoking his politics?

        • dang 2 days ago ago

          discuss != battle

          • Ar-Curunir 2 days ago ago

            In the context of what Elon has done, the only real discussion should be condemnation. If that leads to Elon fans feeling embattled, well, they should get better role models to look up to.

            • dang a day ago ago

              It's not what this site is for, and destroys what it is for. Preserving the community has to take precedence.

      • johnnyanmac 2 days ago ago

        So, it utterly fails? A good part of the community still seems to be stuck in 2017, when Elon could do no wrong.

        Turns out a lot of not just wrong, but malice could be done in 9 years. And worse yet, incompetent malice. I don't know why that has to be a political statement these days, but them's the breaks here.

        • snackerblues 2 days ago ago

          Please don't use Hacker News for political or ideological battle. It tramples curiosity.

          • johnnyanmac 2 days ago ago

            And I repeat

            > I don't know why [this person having done bad actions] has to be a political statement these days, but them's the breaks here.

            Thanks for proving my point.

            • solid_fuel a day ago ago

              They’re a troll account, only a few days old.

            • snackerblues 2 days ago ago

              You can always go back to Reddit

              • johnnyanmac 2 days ago ago

                > When you burn enough bridges, the only way to move is forward

                I don't "move back", only move forward to the next community. Until that is compromised. Then the cycle repeats anew.

                I wonder which community, if any, will break that cycle.

      • 2 days ago ago
        [deleted]
      • kubb 2 days ago ago

        Is it politics or ideology to recognize the flawed character of someone? How cultish his following is? His erratic behavior, the damage that he's doing?

        Some people will cry "politics" just to take the voice away from those who dare to question their beloved celebrities.

        • baublet 2 days ago ago

          Yeah and it’s not our fault every Elon discussion involves politics. It’s literally all he does all day, and all he seems interested in, anymore.

      • mathisfun123 2 days ago ago

        [flagged]

      • croes 2 days ago ago

        They trample science; it's the paradox of tolerance in action.

        He who fights may lose; he who doesn't fight has already lost.

      • throw_m239339 2 days ago ago

        > Please don't use Hacker News for political or ideological battle. It tramples curiosity.

        That ship has sailed a long time ago, with the approval of the moderation itself.

        • dang 2 days ago ago

          That's excellent modbait, but of course what you say is the opposite of what we approve.

          It's a complex and hard question, but the principles we apply to it have been around for a long time and are consistent with the site guidelines. If they weren't, we'd change the latter.

          I've explained all of this many times. If you, or anyone, would like to know how we approach the question, you could start here:

          https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

          • throw_m239339 2 days ago ago

            I simply disagree, you know what topics I flagged, I'm not trying to bait you or any other moderator, and will discuss the matter no further.

        • 0xpgm 2 days ago ago

          Yup, since around 2016 HN and other tech spaces got infested with people who cannot separate their political ideology from technical discussions.

          When it comes to FOSS they claim that FOSS has always been political to justify the politicization of everything they touch.

          Things used to be much better when the people adhered to the age-old wisdom "Keep politics and religion out of the office" and carried this attitude to neutral spaces online.

          In part, some of us got into tech because it was one of the places where meritocracy ruled and you could get away from those who thrive by overwhelming others with BS.

          I apologize for the rant.

          • datsci_est_2015 2 days ago ago

            Being “apolitical” is a luxury of the privileged, especially in turbulent times.

            True tests of courage, morals, and ethics are occurring more and more every day now, especially in the tech industry that is so closely intertwined with the regimes across the world who seek to cause great harm to those who do not look like, speak like, or believe in the same things as them.

            "The only thing necessary for the triumph of evil is for good men to do nothing" - there’s your quote for political apathy.

    • I_am_tiberius 2 days ago ago

      [flagged]

      • halfmatthalfcat 2 days ago ago

        [flagged]

      • selkin 2 days ago ago

        Many wouldn't, but some people share his values, and given the compensation, it makes saying "no" much harder. Money may not be the most important thing in life, but it does make life a lot easier to live.

      • pelorat 2 days ago ago

        Same, I earn 60K as a senior, but I would never accept a 200K+ position at xAI.

        • yndoendo 2 days ago ago

          As an US Citizen, you have to pay me to engage with Elon Musk's businesses. He is not a good person and does not deserve respect or admiration.

          • daveguy 2 days ago ago

            As a US citizen, you couldn't even pay me to engage with Elon Musk's businesses. He is not a good person and does not deserve respect or admiration.

      • johnnyanmac 2 days ago ago

        We had cabinet members for this administration call Trump a nazi months prior to the nomination. People give up all kinds of morals for financial gain. That was always true, but it's become outright blatant this past decade.

      • weirdmantis69 2 days ago ago

        You wouldn't want to work for a genius? Probably the most significant person alive today?

        • troosevelt 2 days ago ago

          I don't think he's a genius, but even if he is, it'd still be beneath my standards.

        • matsemann 2 days ago ago

          I can think of lots of significant people I wouldn't work for..

        • davidwritesbugs 2 days ago ago

          Get down to A&E quick - you've clearly drunk a potentially fatal amount of Elon Kool-Aid. Musk is a buffoon. Clever? Yes, by all accounts. Genius? Hardly. He's had luck, and made good judgments that mostly offset the bad ones. Most of all, he has enough money to power through errors that would bankrupt thee and me.

        • rf15 2 days ago ago

          Evidently not genius enough to keep his car business and global image from failing. A genius he might be, but he's only entrenching his position in a way not dissimilar to cults: by alienating a lot of people you can get loyalty from a select few. If that's the kind of power he wants, sure, he's a genius. But a good businessman is something else.

        • InsideOutSanta 2 days ago ago

          Let's assume that you are correct. How is that relevant to how good he is as an employer? There are lots of people in history who were very significant and perhaps geniuses in some way that I wouldn't want to work for in a billion years.

        • 2 days ago ago
          [deleted]
      • sourcegrift 2 days ago ago

        [flagged]

    • LightBug1 2 days ago ago

      Elon Musk is a generic-indignant tangent wanker and not what this site is for.

      Thanks for providing a space for me to say that.

    • epolanski 2 days ago ago

      tbh I wouldn't give Elon a dime even if Grok was miles better than competition.

      • dang 2 days ago ago

        Ok, but please don't post unsubstantive comments here.

        • epolanski 2 days ago ago

          Is it?

          Elon's persona caused massive drops in usage of twitter, sales of Tesla, etc.

          Unsurprisingly many would not touch grok for the same distrust.

          • dang 2 days ago ago

            Tastes differ, of course, but a comment consisting of nothing more than "I wouldn't give $so-and-so a dime even if $such-and-such was $this-or-that" definitely counts as unsubstantive by the usual standard here.

        • davidw 2 days ago ago

          This is not a fully formed thought, so take it with a grain of salt:

          Keeping politics off of here is a good idea.

          Some things aren't really politics, but morals. Like, a discussion of different tax schemes or how much environmental regulations accomplish what they set out to do or something is 'politics'. Lamenting that there is "no homeland for white people" is... something else.

          It's probably still not likely to have good outcomes as a subject of discussion here, but it's also something the tech industry needs to wrestle with somewhere, somehow.

          My experience of the tech world was that it went from being a collection of oddballs, geeks, nerds and maybe kind of naive politically to mainstreaming some really evil shit.

          I think this will come back to bite the industry, and depending on how angry the people with pitchforks and torches are, could end up hurting more than just the bad actors.

      • maxwell 2 days ago ago

        Would you give one to Sam, Mark, or Sundar?

        • pupppet 2 days ago ago

          What does our system say about itself when people of integrity so rarely rise to the top?

          • EricDeb 2 days ago ago

            I dont know too much but Jensen Huang seems like a good guy

        • lobf 2 days ago ago

          None of these guys literally has the blood of millions of people on their hands.

          Elon’s gutting of USAID (and you can argue they would have done it anyways but he chose to be the executioner) will kill millions of people every year who otherwise would not have died.

          Not only will I never give him a dime, I want him prosecuted and deported.

          Edit: For those downvoting, we're already at an estimated 600k deaths: https://www.impactcounter.com/dashboard?view=table&sort=inte...

      • knicholes 2 days ago ago

        Why?

        • 2 days ago ago
          [deleted]
        • epolanski 2 days ago ago

          He's very hard to like, and he's hard to trust with anything.

        • skywhopper 2 days ago ago

          Because Elon is a criminal scam artist and a horrifying racist who seems to be completely detached from reality.

          • z3ratul163071 2 days ago ago

            if it weren't for HN I wouldn't get a glimpse of how life is on bluesky

          • Layvier 2 days ago ago

            this.

          • SunshineTheCat 2 days ago ago

            I really miss the kindergarten days when we were taught that if you don't have anything nice to say about someone, don't say anything at all.

            • lobf 2 days ago ago

              Sounds like giving a pass to bad people who might face criticism.

            • rexpop 2 days ago ago

              If this is how you feel about oligarchs, well... I guess don't have anything to say.

        • reactordev 2 days ago ago

          Moral grandstanding on account of his political views and the fact that he does Nazi salutes on stage, on TV, for the world to see… might have something to do with it.

        • misiti3780 2 days ago ago

          [flagged]

    • fishcrackers 2 days ago ago

      [dead]

    • cboyardee 2 days ago ago

      [dead]

    • zombiwoof 2 days ago ago

      [dead]

    • heliumtera 2 days ago ago

      [flagged]

    • spprashant 2 days ago ago

      He is re-building a company that he himself built less than 3 years ago?

      • randallsquared 2 days ago ago

        Elon has less regard for sunk costs than most corporate leaders.

        • LightBug1 2 days ago ago

          Ironically, he's the sunk cost.

      • coliveira 2 days ago ago

        [flagged]

        • dang 2 days ago ago

          You've been a good HN user for many years, but lately your comment history has swerved towards ideological battle generally, and unsubstantive flamebait like this post. Can you please swerve back? It's not what this site is for, and destroys what it is for.

          https://news.ycombinator.com/newsguidelines.html

          Edit: before someone pounces, no, I'm in no way defending either E. Just trying to hold up HN.

          • Herring 2 days ago ago

            [flagged]

            • natch 2 days ago ago

              [flagged]

              • throwaway5752 2 days ago ago

                Musk

                * gave a Nazi Sieg Heil salute (twice) at a political event, on video. Famously.

                * has consistently supported a German political party that re-uses Nazi slogans, minimizes or outright denies the Holocaust, and minimizes the criminality of the SS

                * frequently and consistently upvotes posts on X echoing white supremacist and Nazi ideology on his social media site

                * owns the most popular site for neo-Nazis

                To say "is not backed up by any kind of connection to reality" is actually verifiably false. I can't say anything about the other words, but there is evidence for miles that he is sympathetic to Nazi ideology.

                And this is directly relevant here. It can't be ignored when you are talking about his business, or you have an elephant in the room. His personal flaws and megalomaniacal executive style are a package deal.

                • natch 2 days ago ago

                  [flagged]

                  • BoredPositron 2 days ago ago

                    Reality is that which, when you stop believing in it, doesn't go away.

                  • hermanzegerman 2 days ago ago

                    Okay, so what you're saying is that you don't have an argument to back up your opinion and just ignore the other arguments because they could endanger your opinion

                  • throwaway5752 2 days ago ago

                    They aren't specious, though. There's ample public record evidence. You can't rebut them because they are part of the historical record.

                    https://www.youtube.com/watch?v=joV-9FFoA3Q is the "Sieg Heil" video. Anyone can see it with their own eyes.

                    https://www.npr.org/2025/01/27/nx-s1-5276084/elon-musk-germa... is where Musk says, "Frankly too much of a focus on past guilt and we need to move beyond that. Children should not be guilty of the sins of their parents, let alone their great-grandparents." - referring to the Holocaust, just 80 years ago, in which 13 million people were systematically rounded up, placed in concentration camps, and mass murdered by the government, including 6 million Jewish people.

                    https://www.theguardian.com/technology/2023/nov/16/elon-musk... "You have said the actual truth"

                    Regarding Twitter / X, after he took over:

                    According to data provided by the research company Memetica to The New York Times, in the past month, Elon Musk's platform featured 46,000 posts with the hashtag #HitlerWasRight, compared to an average of less than 5,000 posts per month in previous months (an increase of 820%). Posts with the hashtags #DeathtotheJews or #DeathtoJews appeared 51,000 times in the last month, marking a surge of 2,450%.

                    This is the guy claiming to try to make a trustworthy foundational model. There are deeper reasons for Grok's market share problems than the founding team or coding capability. You can't talk about this event and ignore it. He's trying to take SpaceX public and it's only going to get worse. His personal brand is dragging down his companies; as far as I can tell, Tesla has lost 25-50% of its EV market share in Europe in the past 2 years. The problem is not just BYD.

                    And all of this leaves aside his publicly acknowledged drug use, which has led to tension with his boards of directors https://www.wsj.com/business/elon-musk-illegal-drugs-e826a9e...

            • BigTTYGothGF 2 days ago ago

              It might be a nazi bar, but it's a high-class fancy kind of nazi bar like you'd find on the Hindenburg, and that's more important.

              • slater 2 days ago ago

                Does that mean we get to throw the Nazis out the Hindenburg's window, cos they lack tickets?

                • BigTTYGothGF 2 days ago ago

                  Dr. Jones (either one) is too uncouth to be allowed in here.

              • Herring 2 days ago ago

                Jesus...

            • BoredPositron 2 days ago ago

              He is not wrong here, dang, and you have been keen to scold one side of the bar more than the other in the last few months. This thread is a good example: there wasn't even anything in the comments and it got a sticky real quick.

              • ThrowawayTestr 2 days ago ago

                I come to HN because I don't want to read reddit-tier comments like the above.

                • BoredPositron 2 days ago ago

                  [flagged]

                • throwaway5752 2 days ago ago

                  The grandparent comment is from someone that has been on this site almost since the beginning. Far longer than you. They might have insights about the community that you do not.

                  • ThrowawayTestr 2 days ago ago

                    That's even worse; they should know better.

                    • johnnyanmac 2 days ago ago

                      Integrity means understanding when your community is falling into the snares of its own rules. What's the point of formal, nuanced discussion if it's used to empower hate?

                      • throwaway5752 2 days ago ago

                        I agree. For whatever reason, whether it is a change in community sentiment or something else, I get downvoted for talking about this. Karma is about the most useless thing there is to care about, and even if I did care, I couldn't think of a better way to spend accrued social capital.

                        The hold that scumbags - influencers, political operatives, narcissists with Messiah complexes - have taken over young men, particularly those inclined to go into the software field, is alarming.

                        Various former founding members of PayPal, leaders of the companies they've founded subsequently, members of A16Z, and some opportunists and hangers-on to these wealthier individuals who are beneath the dignity of mentioning separately: they have lost their moral and ethical moorings in the course of accumulating massive wealth, and they are corrupting others in doing it.

                        Even modest startup incubators have become obsequious to the wealth and power of these people in the field and decided that money is more important than morality. Or at the very least there is no pretense of it now.

              • natch 2 days ago ago

                Not true in general on HN about one side only; it happens to all sides imho. But if you wanted to measure, you would have to normalize by the total number of occurrences on each side, and there is a lot of passive aggressive wording so the measurement would be easy to do badly.

                To the extent that discussion of discussion is considered boring, perhaps this will get shut down too, but I think it was important to counter your claim.

              • johnnyanmac 2 days ago ago

                I'm not surprised by the moderation direction here (formality above all). But stubs tend to be rare, and I've never seen a stub develop over two top-level comments. Even the most blatantly political posts don't get such treatment (or it takes a long time to happen).

                I'm sure it's common for dead flagged posts, but it seems this story was too significant to pull that smokescreen over this time.

            • johnnyanmac 2 days ago ago

              We're sadly well past friends of friends of friends coming in. At some point the only thing you can do as a non-bartender is to simply leave and never come back.

              I don't want to say we're at that point just yet. But it's something that's been gnawing at me for a while now. I've certainly been disabused of the notion that this is a progressive tech hub interested in bettering humanity.

              • natch 2 days ago ago

                Bettering humanity is a pretty good two word summary of what should be the meaning of life and everyone’s goal. Please don’t disengage.

                • johnnyanmac 2 days ago ago

                  >Please don’t disengage.

                  No worries. I was more referring to this community rather than this society. There are plenty of other "bars" on the internet to choose from.

    • chairmanwow1 2 days ago ago

      [flagged]

      • KennyBlanken 2 days ago ago

        So you deeply admire a man who threw a temper tantrum when his giant box - designed by a bunch of people with no experience in anything underwater or rescue, much less underwater rescue - was deemed unusable for rescuing people from an underwater cave with passages so small that divers had to remove their gear and push it ahead of them? And who then repeatedly, directly, called the lead rescuer a pedophile?

        You deeply admire a man so unable to restrain his ego and temper that much of his production team at Tesla quit, some right to his face, because they couldn't meet his nearly impossible goal of extreme levels of automation on the Model 3 production line? Which, if all else is ignored, cost Tesla billions in delays because of his demands?

        You deeply admire a man who is vehemently racist and misogynistic?

        You deeply admire a man who latches onto just about any conspiracy theory?

        You deeply admire a man who is so desperate for attention he unblocks himself from Twitter users' accounts?

        You deeply admire a man whose companies were under investigation by nearly every federal enforcement agency there is?

        You deeply admire a man who has failed to meet the vast majority of his own publicly stated benchmarks?

        And who engages in PT Barnum levels of bullshit, like having "AI robots" that are actually just robots piloted by unemployed actors?

        The man is a pathological liar who has failed upward not because of some sort of unique talent or skill, but because he's extremely abusive and willing to break any regulation or law he sees as inconvenient.

  • SadErn 2 days ago ago

    [dead]

  • 3327 2 days ago ago

    [dead]

  • beezlewax 2 days ago ago

    [flagged]

  • quater321 2 days ago ago

    [flagged]

  • antonvs 2 days ago ago

    dang wrote:

    > You may not owe you-know-whom better, but you owe this community better if you're participating in it.

    This is like telling a country that’s being invaded that they can only respond with strongly worded letters when their enemy is dropping tactical nukes on them.

    But hey, Paul Graham and cronies benefit from the status quo as much as any other billionaire, so let’s not rock the boat, right?

    The word “complicit” comes to mind.

    • dang 2 days ago ago

      It's nothing like that. We're just trying to have an internet forum that manages to stay slightly more interesting than the internet mean. Or, if you like, that doesn't burn itself to a crisp and turn into scorched earth. It is rather a modest goal. I think there is a place for such a website, and I believe most HN readers do too.

      • itomato 2 days ago ago

        It's something that many readers feel should be burned to a crisp.

        Let it be.

        • dang a day ago ago

          People love to say such things, but actual behaviors are more complex than that.

  • gkfasdfasdf 2 days ago ago

    The grok button on twitter is pretty awesome. Instantly summarize / explain any tweet, even memes, including replies. Ask follow up questions. Not sure many people know it's there.

    Also grok in the Tesla is fun, get answers to questions without looking at a phone. I once had it search up a blog post and read it out to me while driving. The NSFW mode is pretty...disgusting so I leave that off.

    I hope they find a way with Optimus or something. FSD is incredible. More competition is a good thing.

  • tim-tday 2 days ago ago

    There should be a social theory study of jerks and the people who continue to work for them. I'd imagine there's a sort of logarithmic drop-off. It takes time for knowledge of terrible bosses to work through the labor force, but tech is a small community (of hopefully smart people) and people start to catch on.

    A story I heard from a Tesla employee was that it’s impossible to hire because musk is mercurial and every time a hire makes it through the pipeline a hiring freeze cancels all offers. Another story that employees were told to avoid his desk because he randomly fires people. Another that he regularly cancels bonuses because he’s feeling petulant. I heard another story that he threw up Nazi salutes on an internationally televised event. No wait, that one I watched happen.

    After a while smart people will simply decline to work for him no matter the compensation. Not everyone, but enough to start to matter.

    • bwfan123 a day ago ago

      > After a while smart people will simply decline to work for him no matter the compensation. Not everyone, but enough to start to matter.

      Among people there are two types:

      1) Those who gain self-esteem by associating themselves with leader-like figures who project visions of the future. This explains cults, religious and political groups, and the like. It comes from our ape DNA, where the alphas are blindly worshipped.

      2) Those with enough cognitive development and independent thinking to avoid their ape-brain getting short-circuited into worshipping the alphas.

      The majority of people are in group 1, and my hypothesis is that they form the vast majority of the customer and employee base of Musk's companies.

  • blueaquilae 2 days ago ago

    Yes, 11 upvotes, and everyone feels free to insult a model that tops adoption. "Aligned with your personal view" is not the same as "ahead of the curve"; it's just personal.