How much slower is random access, really? (samestep.com)
andersa 2 hours ago [-]
Note this is not true random access in the manner it occurs in most programs. By having a contiguous array of indices to look at, that array can be prefetched as it goes, and speculative execution will take care of loading many upcoming indices of the target array in parallel.

A more interesting example would be one where each slot in the target array holds the next index to visit in addition to its value; that introduces a dependency chain which prevents this from happening.
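A minimal sketch of the difference (not from the article; the permutation-cycle setup and types are illustrative assumptions):

    #include <stdint.h>
    #include <stddef.h>

    /* Independent lookups: idx[] is read sequentially, so it prefetches well,
       and the data[idx[k]] loads do not depend on each other, so the
       out-of-order core can keep many of them in flight at once. */
    uint64_t sum_independent(const uint64_t *data, const uint32_t *idx, size_t n) {
        uint64_t sum = 0;
        for (size_t k = 0; k < n; k++)
            sum += data[idx[k]];
        return sum;
    }

    /* Dependent chain: each slot stores the next index to visit (here the
       array is assumed to hold one big permutation cycle), so the address of
       load k+1 is only known after load k completes. Every cache miss now
       pays its full latency instead of overlapping with its neighbours. */
    uint64_t sum_chain(const uint64_t *next, size_t n) {
        uint64_t sum = 0;
        size_t i = 0;
        for (size_t k = 0; k < n; k++) {
            i = next[i];
            sum += i;
        }
        return sum;
    }

The second loop is essentially linked-list traversal expressed with indices, which is why it measures memory latency rather than bandwidth.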

jiggawatts 41 minutes ago [-]
This is why array random access and linked-list random access have wildly different performance characteristics.

Another thing I noticed is that the spike on the left-hand side of his graphs is the overhead of file access.

Without that overhead, random access into small arrays should have a much lower per-element cost.

porcoda 52 minutes ago [-]
The RandomAccess (or GUPS) benchmark (see: https://ieeexplore.ieee.org/document/4100365) was designed to measure machines on exactly this kind of workload. In high-performance computing it mattered for graph calculations, and it was one of the things the Cray (formerly Tera) MTA machine was particularly good at. I suppose this benchmark wouldn't be very widely known outside HPC circles.
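For readers outside HPC, here is a minimal single-threaded sketch of the kind of update loop GUPS measures (simplified; the polynomial constant and seeding are recalled from the HPCC reference kernel and should be treated as illustrative, not authoritative):

    #include <stdint.h>

    #define POLY 0x0000000000000007ULL  /* feedback polynomial for the pseudo-random stream */

    /* XOR-update random slots of a large table; table_size must be a power of
       two. Nearly every update misses cache and TLB, and throughput is
       reported as "giga-updates per second" (GUPS). */
    void gups_kernel(uint64_t *table, uint64_t table_size, uint64_t n_updates) {
        uint64_t ran = 1;
        for (uint64_t i = 0; i < n_updates; i++) {
            ran = (ran << 1) ^ ((int64_t)ran < 0 ? POLY : 0);  /* Galois LFSR step */
            table[ran & (table_size - 1)] ^= ran;
        }
    }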
Adhyyan1252 2 hours ago [-]
Love this analysis! Was expecting random to be much slower. 4x is not bad at all
forrestthewoods 2 hours ago [-]
Here’s an older blog post of mine on roughly the same topic:

https://www.forrestthewoods.com/blog/memory-bandwidth-napkin...

I’m not sure I agree with the data presentation format. “time per element” doesn’t seem like the right metric.
