NHacker Next
Google DeepMind Releases AlphaGenome (deepmind.google)
dekhn 2 hours ago [-]
When I went to work at Google in 2008 I immediately advocated for spending significant resources on the biological sciences (this was well before DeepMind started working on biology). I reasoned that Google had the data-wrangling and ML capabilities required to demonstrate world-leading results (and hopefully guide the way so other biologists could reproduce their techniques). We made some progress: we used Exacycle to demonstrate some exciting results in protein folding and design, and later launched Cloud Genomics to store and process large datasets for analytics.

I parted ways with Google a while ago (sundar is a really uninspiring leader), and was never able to transfer into DeepMind, but I have to say that they are executing on my goals far better than I ever could have. It's nice to see ideas that I had germinating for decades finally playing out, and I hope these advances lead to great discoveries in biology.

It will take some time for the community to absorb this most recent work. I skimmed the paper and it's a monster, there's just so much going on.

deepdarkforest 51 minutes ago [-]
> Sundar is a really uninspiring leader

I understand, but he made Google a cash machine. In the last quarter before he became CEO in 2015, Google made a quarterly profit of around $3B; Q1 2025 was $35B. 10x profit growth at this scale is, well, unprecedented; the numbers are inspiring by themselves, and that's his job. He made mistakes, sure, but he stuck to Google's big gun, ads, and it paid off. The transition to AI started late, but Gemini is super competitive overall. DeepMind has been doing great as well.

Sundar is not a hypeman like Sam or Cook, but he delivers. He is very underrated imo.

modeless 39 minutes ago [-]
Like Ballmer, he was set up for success by his predecessor(s), and didn't derail strong growth in existing businesses but made huge fumbles elsewhere. The question is, who is Google's Satya Nadella? Demis?
bitpush 31 minutes ago [-]
Since we're on the topic of Microsoft, I'm sure you'd agree that Satya has done a phenomenal job. But if you look objectively, what are Satya's accomplishments? One word: Azure. Azure is #2, behind AWS, because of Satya's effective and strategic decisions. But that's it. The "vibes" around Microsoft have changed, but MS hasn't innovated at all.

Satya looked like a genius last year with the OpenAI partnership, but it is becoming increasingly clear that MS has no strategy. Nobody is using GitHub Copilot (the pioneer) or MS Copilot (a joke). They don't have any foundational models, nor a consumer product. Bing is still... Bing, and has barely gained any market share.

modeless 28 minutes ago [-]
Microsoft has become a lot more friendly to open source under Satya. VSCode, GitHub, and WSL happened during his tenure, and probably wouldn't have happened under Ballmer. Turning the ship from a focus on protecting platform lock-in to meeting developers where they are is a huge accomplishment IMO.
SV_BubbleTime 17 minutes ago [-]
I like that you are writing this as a defense of Google and Sundar.
CuriouslyC 38 minutes ago [-]
He delivered revenue growth by enshittifying Goog's products. Gemini is catching up because Demis is a boss and TPUs are a real competitive advantage.
bitpush 30 minutes ago [-]
You either attribute both the good and the bad to the CEO, or you don't. If enshittification is the CEO's fault, then so is Gemini's success.
fwip 18 minutes ago [-]
Why? We've all seen organizations in which some things happen because of the CEO, and others happen in spite of them.
jama211 10 minutes ago [-]
But you don’t just get to pick which is which, willy nilly, to push your opinions.
fwip 2 minutes ago [-]
Right, of course, but I don't see any evidence from which to assume that they're picking "willy nilly."
agumonkey 32 minutes ago [-]
Their brand is almost cooked though. At least the legacy search part. Maybe they'll morph into AI center of the future, but "Google" has been washed away.
bitpush 28 minutes ago [-]
The world is much, much bigger than the HN bubble. Last year we were all so convinced that Microsoft had it all figured out, and now look at them. A billion is a very, very large number, and sometimes you fail to appreciate how big it is.
agumonkey 23 minutes ago [-]
Oh, I'm conveying opinions other than mine. Tech people I work with, who are actually very far removed from the HN mindset, were shitting on Google search all this week.
lukan 12 minutes ago [-]
Google ads are still everywhere, whether you google or not.

The question will be when, and how, LLMs will be attacked with product placements.

Openly marked advertisements in premium models and integrated ads in free-tier ones?

I still hope for a mostly ad-free world, but in reality Google seems to be in a good position now for the transition towards AI (with ads).

tiahura 13 minutes ago [-]
> Maybe they'll morph into AI center of the future

Haven't you been watching the headlines here on HN? The volume of major high-quality Google AI releases has been almost shocking.

And, they've got the best data.

oceanplexian 12 minutes ago [-]
> The transition to AI started late but gemini is super competitive overall.

If by competitive you mean "we spent $75 billion and now have a middle-of-the-pack model somewhere between Anthropic and a Chinese startup", that's a generous way to put it.

deepdarkforest 6 minutes ago [-]
By competitive, I mean #1 in LM Arena overall, in webdev, in image gen, in grounding, etc., plus leading the Chatbot Arena Elo. Flash is the most used model on OpenRouter this month as well, and Gemma models are leading on-device stats too. So yes, competitive.
spankalee 38 minutes ago [-]
Did you ride the Santa Cruz shuttle, by any chance? We might have had conversations about this a long while ago. It sounded so exciting then, and still does with AlphaGenome.
bitpush 1 hours ago [-]
> It's nice to see ideas that I had germinating for decades finally playing out

I'm sure you're a smart person, and probably had super novel ideas, but your reply comes across as super arrogant/pretentious. Most of us have ideas, even impressive ones (here's an example: let's use LLMs to solve world hunger, poverty, and loneliness, and fix capitalism), but it'd be odd to go and say "Finally! My ideas are finally getting the attention".

dvaun 1 hours ago [-]
A charitable view is that they intended "ideas that I had germinating for decades" to be from their own perspective, and not necessarily spurred inside Google by their initiative. I think that what they stated prior to this conflated the two, so it may come across as bragging. I don't think they were trying to brag.
alfanick 49 minutes ago [-]
I don't find it rude or pretentious. Sometimes it's really hard to express yourself in, hmm, an acceptably neutral way when you've worked on truly cool stuff. It may look like bragging, but that's probably not the intention. I often face this myself, especially when talking to non-tech people: how the heck do I explain what I work on without giving a primer on computer science!? Often "whenever you visit any website, it eventually uses my code" is a good enough answer (I worked on the AWS EC2 hypervisor, and, well, whenever you visit any website, some dependency of it eventually hits AWS EC2).
camjw 33 minutes ago [-]
100% but in this case they uh… didn’t work on it, it seems?
CGMthrowaway 1 hours ago [-]
Yeah it comes off as braggy, but it’s only natural to be proud of your foresight
shadowgovt 1 hours ago [-]
FWIW, I interpreted more as "This is something I wanted to see happen, and I'm glad to see it happening even if I'm not involved in it."
dekhn 58 minutes ago [-]
That's correct. I can't even really take credit for any of the really nice work, as much as I wish I could!
plemer 1 hours ago [-]
Could be either. Nevertheless, while tone is tricky in text, the writer is responsible for resolving ambiguity.
spongebobstoes 42 minutes ago [-]
eliminating ambiguity is impossible. the reader should work to find the strongest interpretation of the writer's words
coderatlarge 11 minutes ago [-]
that’s a lot to expect of readers… good writing needs to give readers every opportunity to find the good in it.
shadowgovt 7 minutes ago [-]
It is a lot to expect of readers... It's also explicitly asked of us in this forum. https://news.ycombinator.com/newsguidelines.html. "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
perching_aix 39 minutes ago [-]
It's also natural language, though; one can find as much ambiguity in there as they care to inject. It hasn't for a single moment come across as pretentious to me, for example.

Think of all the tiresome Twitter discussions that went like "I like bagels -> oh, so you hate croissants?".

pinoy420 1 hours ago [-]
[dead]
varelse 18 minutes ago [-]
[dead]
Scaevolus 2 hours ago [-]
Naturally, the (AI-generated?) hero image doesn't properly render the major and minor grooves. :-)
jeffbee 2 hours ago [-]
And yet still manages to be 4MB over the wire.
smokel 30 minutes ago [-]
That's only on high-resolution screens. On lower resolution screens it can go as low as 178,820 bytes. Amazing.
nextos 2 hours ago [-]
I found it disappointing that they ignored one of the biggest problems in the field: distinguishing causal from non-causal variants among highly correlated DNA loci. In genetics jargon, this is called fine mapping. Perhaps this is something for the next version, but it is really important for designing effective drugs that target key regulatory regions.

One interesting example of such a problem and why it is important to solve it was recently published in Nature and has led to interesting drug candidates for modulating macrophage function in autoimmunity: https://www.nature.com/articles/s41586-024-07501-1

rattlesnakedave 2 hours ago [-]
Does this get us closer? I'm pretty uninformed, but it seems that better functional predictions make it easier to pick out which variants actually matter versus the ones just along for the ride. Step 2 is probably integrating this with proper statistical fine-mapping methods?
nextos 2 hours ago [-]
Yes, but it's not dramatically different from what is out there already.

There is a concerning gap between prediction and causality. In problems like this one, where lots of variables are highly correlated, prediction methods that have only an implicit notion of causality don't perform well.

Right now, SOTA seems to use huge population data to infer causality within each linkage block of interest in the genome. These types of methods are quite close to Pearl's notion of causal graphs.
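To make the fine-mapping idea above concrete, here is a minimal sketch (my own illustration, not from the paper or this thread) of turning per-variant GWAS summary statistics into posterior inclusion probabilities within one linkage block, using Wakefield-style approximate Bayes factors and the simplifying assumption of exactly one causal variant per block; real tools handle multiple causal variants and explicit LD matrices.

```python
import numpy as np

def wakefield_log_abf(beta, se, W=0.04):
    """Log approximate Bayes factor (association vs. null) for one variant,
    from its GWAS effect estimate `beta` and standard error `se`.
    W is the prior variance of the true effect size (an assumption)."""
    z = beta / se
    V = se ** 2
    r = W / (W + V)
    return 0.5 * (np.log(1 - r) + r * z ** 2)

def finemap_pips(betas, ses):
    """Posterior inclusion probabilities for variants in one LD block,
    assuming exactly one causal variant and a flat prior over variants."""
    log_abfs = np.array([wakefield_log_abf(b, s) for b, s in zip(betas, ses)])
    w = np.exp(log_abfs - log_abfs.max())  # subtract max for numerical stability
    return w / w.sum()

# Toy block of 3 correlated variants: the first has the strongest signal,
# so it receives the largest posterior inclusion probability.
pips = finemap_pips([0.50, 0.10, 0.05], [0.10, 0.10, 0.10])
```

The `W=0.04` prior and the single-causal-variant assumption are illustrative choices; the point is only that population-scale effect estimates, not functional predictions alone, drive the causal attribution within a block.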

ejstronge 2 hours ago [-]
> SOTA seems to use huge population data to infer causality within each linkage block of interest in the genome.

This has existed for at least a decade, maybe two.

> There is a concerning gap between prediction and causality.

Which can be bridged with protein prediction (alphafold) and non-coding regulatory predictions (alphagenome) amongst all the other tools that exist.

What is it that does not exist that you "found it disappointing that they ignored"?

nextos 1 hours ago [-]
> This has existed for at least a decade, maybe two.

Methods have evolved a lot in a decade.

Note how AlphaGenome's prediction at 1 bp resolution for CAGE is poor: just Pearson r = 0.49. CAGE is very often used to pinpoint causal regulatory variants.

seydor 1 hours ago [-]
This is such an interesting problem. Imagine expanding the input size to 3.2 Gbp, the size of the human genome; I wonder if previously unimaginable interactions would occur. Also interesting how everything revolves around U-nets and transformers these days.
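A back-of-envelope calculation (my numbers, not from the article) shows why naive dense attention over a whole genome at 1 bp per token is out of reach, and why pooling/U-Net-style downsampling shows up in these architectures:

```python
# Memory for one dense self-attention score matrix over a full human
# genome, assuming 1 bp per token, fp16 scores, a single head, and no
# sparsity or chunking -- purely illustrative.
seq_len = 3_200_000_000          # ~3.2 Gbp
bytes_per_score = 2              # fp16
score_matrix_bytes = seq_len ** 2 * bytes_per_score

# ~2e19 bytes, i.e. tens of exabytes for a single attention matrix.
print(f"{score_matrix_bytes / 1e18:.0f} exabytes")  # prints "20 exabytes"
```

Whatever the real architecture does, any quadratic-in-length component has to operate on heavily downsampled or windowed representations at this scale.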
teaearlgraycold 47 minutes ago [-]
> Also interesting how everything revolves around U-nets and transformers these days.

To a man with a hammer…

SV_BubbleTime 15 minutes ago [-]
Soon we’ll be able to get the whole genome up on the blockchain.
mountainriver 46 minutes ago [-]
With the huge jump in RNA prediction, this seems like it could be a boon for the wave of mRNA labs.
iandanforth 38 minutes ago [-]
Those outside the US at least ...