OpenAI looked at buying Cursor creator before turning to Windsurf (cnbc.com)
OxfordOutlander 3 hours ago [-]
It makes sense for OpenAI to overpay for wrapper companies that have distribution - a good analogy is British pub (bar) companies. By the mid-2000s they were struggling: low margins, a rising cost base, expensive debt.

What saved them? Heineken. They didn't care if the pubs made much of a profit - they made positive margins on having people drink their beer at market prices. They just wanted to increase volume. So they bought up several major players. In 2008 they acquired Scottish & Newcastle's operations, later rebranded Star Pubs & Bars, which had 1,049 leased and tenanted pubs, and finally half of Punch Taverns.

The same strategy can work for OpenAI - buy up the wrapper companies, and make sure YOUR models are served to the user base.

abxyz 5 minutes ago [-]
An apt analogy, given...

"The company must change its mindset and become proactive in its approach to compliance. I have decided this can best be achieved by the imposition of a sanction that will serve as a deterrent to future non-compliant conduct by Star and other pub-owning businesses."

https://www.gov.uk/government/news/heineken-pub-company-fine...

bluelightning2k 19 minutes ago [-]
Nice analogy. Although a simpler way to say it is vertical integration, a well-known term for the phenomenon with a whole class of benefits.

One of those benefits brings to mind another analogy: Apple. The AI model and the tooling are kind of like the hardware and software. By co-developing them you can make a better product, and certainly something hard to compete with.

somerandomness 59 minutes ago [-]
VS Code fork 'default model' is the new default search engine
justanotheratom 3 hours ago [-]
Recently, OpenAI CFO Sarah Friar said,

"We have something called A-SWE, which is the Agentic Software Engineer. And this is where it starts to get really interesting because it’s not just about augmenting a developer, like a Copilot might do, but actually saying, ‘Hey, this thing can go off and build an app.’ It can do the pull request, it can do the QA, it can do the bug testing, it can write the documentation."

https://www.youtube.com/watch?v=2kzQM_BUe7E The relevant discussion about A-SWE begins around the 11:26 mark (686 seconds).

maronato 1 hour ago [-]
The few times I've used Cursor or Claude Code for tasks beyond simple tests or features, I found myself spending more time correcting their errors than if I had written the code from scratch.

I like Cursor and use it daily, but none of its models are even close to being able to take on nontrivial work. Besides, it quickly gets expensive if you’re using the smarter models.

IMO these AI tools will become to software engineers what CAD is to mechanical and civil engineers. Can they work without it? Sure, but why would they?

theturtletalks 1 hour ago [-]
This is because Cursor is not sending the full context, even when you drag and drop things into the chat box.

I started getting worse results from Cursor too. Then Gemini 2.5 Pro dropped with 1M context, so I repomixed my project, popped it into AI Studio, and asked it to make me prompts I can feed into Cursor to fix the issues I have.

Gemini has the whole picture, and the prompts it creates tell Cursor what to change and how.
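For a sense of what that first step looks like, here's a rough Python stand-in for the "pack the whole repo into one file" part (repomix does this for you; the directory filters and output name below are arbitrary illustrative choices, not repomix's actual defaults):

    import pathlib

    SKIP_DIRS = {".git", "node_modules", "dist", "__pycache__"}
    KEEP_EXTS = {".py", ".ts", ".tsx", ".js", ".rs", ".md", ".toml", ".json"}

    def pack_repo(root: str, out_file: str = "repo-packed.txt") -> None:
        # Walk the repo, skip junk directories, and concatenate source files
        # into one text file with per-file headers the model can anchor on.
        root_path = pathlib.Path(root)
        with open(out_file, "w", encoding="utf-8") as out:
            for path in sorted(root_path.rglob("*")):
                if any(part in SKIP_DIRS for part in path.parts):
                    continue
                if not path.is_file() or path.suffix not in KEEP_EXTS:
                    continue
                rel = path.relative_to(root_path)
                out.write(f"\n===== {rel} =====\n")
                out.write(path.read_text(encoding="utf-8", errors="replace"))

    if __name__ == "__main__":
        pack_repo(".")  # paste repo-packed.txt into AI Studio, ask for Cursor-ready prompts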

foobiekr 3 hours ago [-]
And you believe this?

Surely then they have no SWE reqs, right?

justanotheratom 3 hours ago [-]
just stating facts.
stefan_ 2 hours ago [-]
The fact being that Sarah "CEO of Nextdoor" Friar stated they have this A-SWE vapor thing? I don't see how that makes what she said factual.
Jcampuzano2 3 hours ago [-]
If they have this, why are they hiring, and how much of OpenAI's own code has it written?
Drakim 3 hours ago [-]
...don't get high on your own supply?

It's pretty obvious that these tools are not replacements for developers as of yet. I've tried them, and they are very nifty and can even do some boring tasks really well, but they can't actually substitute for real developer skill (yet). But everybody is holding their breath because it looks like they might eventually reach that level, and the time-frame for that "eventually" is unknown.

Jcampuzano2 2 hours ago [-]
But that's exactly what she marketed it as, and she made the claim that it already exists: an agent for hire that can do everything a SWE can do.

If this truly exists they'd have no need to hire, since it'd force-multiply their existing developers.

What better marketing than being able to proudly claim that "OpenAI no longer hires those pesky expensive developers and you can too" because they can improve/multiply the productivity of their existing developers with their innovations.

sandeepkd 1 hour ago [-]
Looks like there are engineers with a few years under their belt who feel super excited about the capability of these tools, and then executives and business people who have to toe the line since everyone else is doing it. On the other hand, there are engineers with many years of field and subject expertise who are skeptical about the advertised capabilities; however, they are either quiet or in wait-and-see mode to see how it plays out.

As someone who has been in both engineering and management roles, I feel the manager role (not all, but a lot of managers are just information pass-through, and a tool would be more consistent at it) should be relatively easier to automate. I'm a bit surprised no one talks about that as a possibility.

darth_avocado 2 hours ago [-]
The question isn’t “can AI code?”, the question is “can AI keep coding?”.

How do any of these companies create “an AI Software Engineer”? Scraping knowledge posted by actual engineers on StackOverflow? Scraping public (& arguably private) GitHub repos created by actual engineers? What happens when all of them are out of a job? AI gets trained on knowledge generated by AI? Where will the incremental gain come from?

It’s like saying I will teach myself to cook better food by only learning from recipe books I created based on the knowledge I already have.

fragmede 1 hours ago [-]
> AI gets trained on knowledge generated by AI?

This sounds like the ouroboros snake eating its own tail, which it is, but because tool use lets it compile and run code, it can generate code for, say, Rust that does a thing, iterate until the borrow checker is no longer angry, then run the code to assert it does what it claims to, and feed the working code into the training set as good code (and the non-working code as bad). Even using only the recipe books you already had, a lot of cooking practice would make you a better cook. And once you learn the recipes in the book well, mixing and matching them (egg preparation from one, flour ratios from another) is something a good cook just gets a feel for, even if they only ever used that one book.
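A rough sketch of that loop in Python, assuming a hypothetical generate_candidate() model call (the compile-and-run steps use plain rustc and subprocess; nothing here is a real OpenAI or Cursor API):

    import pathlib
    import subprocess
    import tempfile

    def compile_and_test(source: str) -> bool:
        # True if the candidate Rust program compiles (borrow checker happy)
        # and its built-in test harness passes.
        with tempfile.TemporaryDirectory() as tmp:
            src = pathlib.Path(tmp) / "main.rs"
            src.write_text(source)
            build = subprocess.run(
                ["rustc", "--test", str(src), "-o", f"{tmp}/candidate"],
                capture_output=True,
            )
            if build.returncode != 0:
                return False  # didn't compile: label as bad
            tests = subprocess.run([f"{tmp}/candidate"], capture_output=True)
            return tests.returncode == 0

    def collect_training_pairs(task: str, attempts: int = 8) -> list[tuple[str, bool]]:
        # Each candidate gets labeled good/bad by actually running it,
        # and the labeled pairs go back into the training set.
        pairs = []
        for _ in range(attempts):
            candidate = generate_candidate(task)  # hypothetical model call
            pairs.append((candidate, compile_and_test(candidate)))
        return pairs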

wobblyasp 3 hours ago [-]
If they actually had something like that they'd stop talking about it and release it.

Until we play with it, it doesn't exist.

rchaud 2 hours ago [-]
Out of interest, why is the CFO the person commenting on this, as opposed to a product person?
quantumHazer 3 hours ago [-]
All I will say is that she is the CFO of a company that wants to sell """agentic""" SWE models.

edit: typo.

fuzzy_biscuit 3 hours ago [-]
Vaporware until it isn't.
nopinsight 3 hours ago [-]
OpenAI's early investment in Cursor was a masterstroke. Acquiring Windsurf would be another.

The next advances in coding AI depend on real-world coding data, especially how professional developers use agentic AI for coding and other tasks.

RL works well on sufficiently large base models as shown by rapid progress on verifiable problems with good training data, e.g. competition math, competitive coding problems, scientific question answering.

Training LLMs on detailed interaction data from AI-powered IDEs could become a powerful flywheel leading to the automation of practical coding.

kylehotchkiss 3 hours ago [-]
How many developers want to have usage analytics of their editors helping companies build functionality that aspires to replace them? This is silly.
palmotea 3 hours ago [-]
> How many developers want to have usage analytics of their editors helping companies build functionality that aspires to replace them? This is silly.

Honestly, too many. Software engineers can be really, really dumb. I think it has something to do with assuming they're really smart.

But even unwilling developers may be forced to participate (see the recent Shopify CEO email), despite knowing full well what's going on. I mean, tons of people have already had to go through the humiliation of training their offshore replacements before getting laid off, and that's a much more in-your-face situation.

xmprt 2 hours ago [-]
Developers know that AI will replace some of their coworkers. But only the "bad" ones who can't code as well as them. AI will learn from all of their good code and be used to generate similar code that's better than the bad devs but not as good as theirs. The problem is that every developer thinks that the coding bar is going to be just barely below their skill level.
palmotea 34 minutes ago [-]
Exactly. Software engineers can be really, really dumb.
xp84 3 hours ago [-]
The frustrating part is that this is another area where the realities of capitalism seem misaligned with anyone's well-being. Nobody wants to be out of a job, but doing the opposite of the Shopify CEO's strategy, like severely restricting AI usage, looks like a great way to ensure your competitors catch up with you and eat your lunch faster. I don't see any answers, just different ways to destroy ourselves.
philomath_mn 2 hours ago [-]
I agree: the incentives to use more and more AI are too strong. We're all stuck in some form of the prisoner's dilemma and the odds that nobody will defect are much too low.

So it seems the most rational position is to embrace the tools and try to ride the wave before the gravy-train is over.

lnenad 24 minutes ago [-]
> Honestly, too many. Software engineers can be really, really dumb. I think it has something to do with assuming they're really smart.

Maybe I am one of the stupid ones but I don't get you people.

This is going to happen whether you want it or not. The data is already out there. Our choice is either to learn to use the tool so that we have it in our arsenal for the future, or to grumble in the corner that devs are digging their own graves and cry ourselves to sleep. I'd consider the latter to be stupid.

If you had issues with machines replacing your hands in the industrial age, you had the choice of learning how to operate the machines. I consider this to be a parallel.

nopinsight 3 hours ago [-]
Many of those developers likely won't switch immediately. OpenAI could also try to keep them with sweet offers, like generous usage quotas, early access to the latest models, etc.

Once sufficient data is gathered, the next generation models will be among the very best at agentic coding, which leads to stronger stickiness, and so on.

dttze 3 hours ago [-]
How is it supposed to learn to automate development by watching us not do things, which is what LLMs are currently used for?
falcor84 2 hours ago [-]
Reinforcement learning - with vibe coding, it just needs us to give it the reward signal.
dttze 2 hours ago [-]
So a bunch of people who can't code are going to train it? Or, rather, how will you know it is the right reward? Doesn't seem like a good way to train.
visarga 3 hours ago [-]
> Training LLMs on detailed interaction data from AI-powered IDEs could become a powerful flywheel leading to the automation of practical coding.

I agree. But this is a more general flywheel effect. OpenAI has 500M users generating trillions of interactive tokens per month. Those chat sessions are sequences of interaction, where downstream context can be used to judge prior responses. Basically, in hindsight, you check "has this LLM response been good or bad?" and generate a score. You can expand the window to multiple related chats, so you can leverage extended context and hindsight to judge response quality. Using that data you can fine-tune a reward model (RLHF), and with it fine-tune the base model.
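A toy sketch of that hindsight-scoring idea, with judge_with_hindsight() as a hypothetical LLM-as-judge helper (nothing here is a real OpenAI pipeline):

    from dataclasses import dataclass

    @dataclass
    class Turn:
        role: str   # "user" or "assistant"
        text: str

    def hindsight_examples(session: list[Turn], window: int = 6):
        # For each assistant response, look at the downstream turns and let a
        # judge score the response with that hindsight; keep the triples as
        # reward-model training data.
        for i, turn in enumerate(session):
            if turn.role != "assistant":
                continue
            context = session[:i]                       # what the model saw at the time
            followup = session[i + 1 : i + 1 + window]  # downstream turns used as hindsight
            if not followup:
                continue                                # nothing to judge against yet
            score = judge_with_hindsight(context, turn.text, followup)  # hypothetical judge
            yield context, turn.text, score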

But it's not just hindsight analysis. Sometimes users test or implement projects in the real world, and the LLM gets to see idea validation. Other times they elicit tacit experience from humans. That is what I think forms an experience flywheel: the LLM works alongside humans during problem solving, internalizing approaches and learning from outcomes.

Besides problem-solving assistance, LLMs are used for counselling, companionship, and in a therapeutic role. People chat with LLMs to understand and clarify their goals. These are generative teleological models. They are also used by 90% of students, if I am to believe a random article.

So the triad of uses for LLMs is: professional problem solving, goal setting/therapy, and learning. All three benefit from the flywheel effect of interacting with millions of people.

fidotron 3 hours ago [-]
This is beginning to look a bit like OpenAI is becoming to startups what Facebook was in the Instagram and WhatsApp era. Back then Facebook were far more established, and mobile was a big catalyst, but the sums being mentioned here are very large.

We should all start building the products that we think will terrify OpenAI most.

paxys 24 minutes ago [-]
Not at all the same, because these startups are ultimately dependent on OpenAI or OpenAI-like model providers to be able to exist. So OpenAI isn't preemptively quashing competitors like Facebook did, rather moving further up and down the chain (chips -> data centers -> foundation models -> fine-tuned models -> AI-powered products) to expand their business.
matchagaucho 3 hours ago [-]
The difference, though, is these AI IDE startups are essentially built on a fork of Microsoft VS Code.

And MSFT has many end game options to dump free IDEs on the market with integrated AI.

indigodaddy 3 hours ago [-]
To follow up on this train of thought, I feel like it might be game over for everyone else if Google decides to release their own VS Code Cursor-like clone and stays consistent with the incredible context and free tier for Gemini Pro that you currently get from online/webui Gemini and Google AI Studio.

The question is, do they want to go in that direction? (And if they do, do they only allow Gemini models, or do they open it up to a choice of various models (including ones not related to Google/Gemini) and/or BYOK?) I don't see why not, because I believe they would slaughter Cursor, Windsurf, et al. if so ...

falcor84 2 hours ago [-]
Google have https://firebase.studio (originally idx.dev), which they recently updated, so I'd assume that they wouldn't want to cannibalize that.
indigodaddy 2 hours ago [-]
Don’t see why they can’t offer a local option as well
fragmede 48 minutes ago [-]
Google spends an inordinate amount of money on client software in the form of Chrome, so maybe they could do an additional something, but looking at how their Google docs offline support is implemented, I don't know that there's appetite for that.

Then again there's also Android Studio, so what do I know. (not a lot)

fidotron 3 hours ago [-]
I think it speaks to the SV bubble that the single most valuable application of their LLM that they can think of would be software development.

One of the oddities of Instagram and WhatsApp is both of them were twists on what the expected formula for user value was at the time. (Retro photos and international SMS replacement respectively).

doctorpangloss 1 hours ago [-]
What is the appeal of VS Code? It's free and the UI looks nice?
tough 1 hour ago [-]
They already started blocking some MSFT extensions on Cursor; I saw a thread on HN about it the other day.
paxys 19 minutes ago [-]
It's funny that in under a year we went from Sam Altman publicly saying that OpenAI was going to "steamroll" startups that were building products within its blast radius to now offering multiple billions of dollars for those same startups.
fpgaminer 2 hours ago [-]
Usability/Performance/etc aside, I get such a sense of magic and wonder with the new Agent mode in VSCode. Watching a little AI actually wander around the code and making decisions on how to accomplish a task. It's so unfathomably cool.
CalmStorm 3 hours ago [-]
I don’t quite understand why OpenAI would pay so much when there’s a solid open-source alternative like Cline. I tried both, and feel that Cline with DeepSeek v3 is comparable to Cursor and more cost-effective.
gscott 3 hours ago [-]
Plus they already have a coding agent. This is beginning to feel like Yahoo 2.0.

When you have just raised $40 billion, spending $3 billion on a company with a product you also build yourself is dumb as rocks.

KaoruAoiShiho 2 hours ago [-]
That describes Google buying Youtube and Facebook buying WhatsApp, those seem to have turned out okay.
mritchie712 2 hours ago [-]
they get the people too... they are still hiring like crazy.
tristanb 3 hours ago [-]
Oh man - I love Windsurf, but only use Claude in there. This doesn't sound like great news to me.
falcor84 2 hours ago [-]
I mostly use Claude, but have recently been playing with Gemini 2.5 and ChatGPT 4.1, and they've been great too, with slightly different strengths and weaknesses.
PUSH_AX 2 hours ago [-]
Buying a VSCode fork for billions seems wild to me, but hey.
threecheese 3 hours ago [-]
Interestingly I had an email from Windsurf a few days back suggesting I lock in an early adopter price, as they expected to release new pricing plans and increases.

Related to a potential M&A from OpenAI? I’m less likely to follow their suggestion if they turn around and bake this into OpenAI’s product suite.

baq 3 hours ago [-]
> Related to a potential M&A from OpenAI?

related to locking in revenue, any dollar counts when your multiple is 100

warthog 6 hours ago [-]
I wonder what the price was. If it was around $20B, which OAI could have afforded and would have been an acceptable ask by the Cursor guys, it would have been a tough decision not to sell.

You are 25 yo and made $2.5 Bn in 4 years

Aurornis 3 hours ago [-]
Rumors from yesterday were around $3 billion.

They had raised a Series C investment recently. Historically that puts ownership of founders and employees at around 40%. Could be a lot higher for a hot AI company though.

Given two founders, AI company, Series C, and a $3bn purchase price they could each have netted around $750 million in a good scenario. Less if they cashed out in secondaries in previous rounds (which would have been smart). Fantastic outcome for them, obviously. This is the 0.001% scenario that founders dream about.

InkCanon 3 hours ago [-]
Strongly suspect OAI can't afford $20B in cash. Their latest funding round was $40B, and they're burning through money like it's rice paper. They could offer OAI equity, but Cursor's founders would probably be very suspicious of privately valued stock (which is fairy money).

How wise it is to buy Cursor is another question. Current valuation has them at 100x revenue. And I suspect agentic products will be a lot less cash flow positive than traditional SaaS because of the massive cost of all that constant codebase context and stream of code.

sksxihve 2 hours ago [-]
> The initial funding will be $10 billion, followed by the remaining $30 billion by the end of 2025, the person said. But the round comes with a caveat. SoftBank said in an updated disclosure on Monday that its total investment could be slashed to as low as $20 billion if OpenAI doesn’t restructure into a for-profit entity by Dec. 31.

They might not even get the full $40 billion

blitzar 2 hours ago [-]
I would have assumed the same - earlyish-stage companies in this area will likely be happy to take a big wedge of OpenAI stock and a little cash.
lowkey_ 3 hours ago [-]
Fair, but worth noting the founders almost certainly already have gotten fat secondaries, and now have a chance to build something much larger if they can execute.
maille 3 hours ago [-]
Are there any similar solutions for MSVC? Almost all these tools are focused on VSCode.
thefourthchime 3 hours ago [-]
Just open the root folder in Cursor and it'll still do all the stuff for you; then go build it in MSVC. This is how I build apps: I create an empty project in Xcode, then I go over to Cursor and have it write all the code, and then I go back to Xcode to build and run it.
basisword 3 hours ago [-]
I can see why they’re doing it. If I use Cursor I can pick the best model - I’m not stuck using OpenAI if someone outperforms them. If they own the tools they can take away that choice.
demarq 3 hours ago [-]
If OpenAI buys windsurf I’m canceling my subscription immediately!
falcor84 2 hours ago [-]
If I can get Windsurf included in the ChatGPT Plus subscription, that would actually probably keep me from intermittently signing up and abandoning that.
plextoria 3 hours ago [-]
Why?
tcdent 3 hours ago [-]
misdirected combative tendencies disguised as free thinking