X was originally created for, and ran on, a graphics terminal: the DEC VAXstation 100. The VS100 was quite different from the later X thin-client terminals: it required an adapter card to be installed in a host system, and the software running on the VS100 could directly access a chunk of shared memory on the host.
Ports to workstations with inbuilt graphics hardware came later.
References:
[1] https://www.youtube.com/watch?v=cj02_UeUnGQ
[2] https://en.wikipedia.org/wiki/VAXstation#VAXstation_100
For anyone just reading the title: It's about physical thin-client X11 server machines, not xterm.
amiga386 49 minutes ago [-]
I was driving myself mad; xterm was released in 1984, and it didn't really matter that there was no XDM, because you merely needed a window manager to tile your xterm windows...
But sure, "X terminal" here is taken to mean dedicated hardware that runs an X server connecting to a remote X11 display manager, and nothing else. Those were always somewhat niche, in the same way that once terminal emulators existed, general-purpose PCs displaced hardware terminals.
In the 1990s, my university used inexpensive diskless x86 PCs running X386 (the predecessor of XFree86) with just a ramdisk, booted via DHCP / BOOTP / TFTP.
lproven 2 hours ago [-]
I feel like I aged a decade reading that. You're not wrong but it's an interpretation that didn't even cross my mind. :-(
cbm-vic-20 1 hour ago [-]
I recall in the early 90s, X Terminals were useful for accessing applications that were licensed per-machine, or were only available on expensive hardware. X Terminals let users use those applications from anywhere on campus. Very convenient!
black3r 44 minutes ago [-]
In my university we were doing this with Matlab in 2015...
somat 1 hour ago [-]
The tricky thing about justifying an X terminal is that it requires a nice graphics system, and probably a nice CPU to drive that graphics system as well, so really the only thing you don't need is storage. Basically it is hard to save money, because you are buying most of a nice computer anyway.
msgodel 31 minutes ago [-]
It's similar to the issue plan9 terminals have. As long as you have a CPU with an MMU and some RAM (which you need a fair amount of for the graphics anyway) you might as well just run the software locally. All the peripherals are relatively cheap.
msh 32 minutes ago [-]
The Sun Ray terminals they used at my university back in the early 2000s were very nice.
nothingneko 35 minutes ago [-]
wouldn’t you just need enough to render a window? i’m not sure if everything is sent pre-rendered or not
somat 22 minutes ago [-]
Think early-90s computers, and everything required to run an X server well: lots of memory, nice graphics, a nice CPU to move those graphics around. Despite being technically thin clients, dedicated X servers were not cheap.
It is sort of like the anecdote about an early sysadmin who traced a problem with the new department laser printer locking up for hours down to one engineer, who had to be told to knock it off. He explained that he was printing nothing, but the printer had, by far, the most powerful CPU in the building, so he had ported all his simulation programs to PostScript and was running them on the printer.
c-linkage 11 minutes ago [-]
Ah yes, the old ray tracer in PostScript.
ggm 7 hours ago [-]
I had to both administer and operate the early X terminals from several vendors; they were interesting. Labtam made strides developing boxes using the more novel Intel chips, and this may have been what they sold on when they got out of the business and moved to being an ISP in Australia.
I enjoyed using Blits and the early DEC Ultrix workstations.
Thin X terminals were super cool. But they also really stressed your Ethernet, and because we didn't have good audio models in X at that time, they stopped being as useful once multimedia became viable. But for a distraction-free world of multiple terminals and a low-overhead wm... super good price/performance.
wkat4242 6 hours ago [-]
I was surprised how a room of top-notch 1280x1024 terminals was able to function so well on a shared 10 Mbps segment, with pretty bad collision detection to boot. X apps of the day were super optimised for local drawing. Even games were super smooth. Toolkits like Motif were all draw calls. By the way, back then we thought Motif was bloated lol :)
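To put numbers on that: a single full-screen repaint of a 1280x1024 display at 8 bits per pixel is about 1.25 MB, i.e. roughly 10 Mbit -- around a second of that whole shared segment -- while a core-protocol draw request is a few dozen bytes. Here is a minimal Xlib sketch (mine, for illustration; the window contents and sizes are arbitrary) contrasting the two styles:

    /* Draw-call style vs. bitmap style over the X protocol.
       Build with: cc drawdemo.c -lX11 */
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <stdlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);  /* DISPLAY may be a remote X terminal */
        if (!dpy) return 1;
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 400, 300,
                                         1, BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        GC gc = XCreateGC(dpy, win, 0, NULL);
        XSetForeground(dpy, gc, BlackPixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                /* Motif-style: two tiny requests, rasterised on the terminal. */
                XDrawRectangle(dpy, win, gc, 20, 20, 200, 100);
                XDrawString(dpy, win, gc, 40, 70, "draw calls", 10);

                /* Bitmap-style: every pixel of the buffer crosses the wire. */
                int w = 100, h = 100;
                char *pixels = calloc((size_t)w * h, 4);  /* client-side buffer */
                XImage *img = XCreateImage(dpy, DefaultVisual(dpy, scr),
                                           DefaultDepth(dpy, scr), ZPixmap, 0,
                                           pixels, w, h, 32, 0);
                XPutImage(dpy, win, gc, img, 0, 0, 250, 20, w, h);
                XDestroyImage(img);  /* also frees the pixel buffer */
            }
            if (ev.type == KeyPress) break;
        }
        XCloseDisplay(dpy);
        return 0;
    }

Over a remote DISPLAY, the first two calls cost a handful of packets regardless of what they draw; the XPutImage scales with the pixel count, which is exactly what hurt once browsers started shipping images.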
And then... came the internet. People suddenly started running NCSA Mosaic in droves, which bogged down the single-core server. And those browsers started to push lots of bitmap stuff through the pipe to the terminals. Now that was bad, yes. When Netscape came along, with its image backgrounds and an even heavier process, people started moving away to the PC rooms :( Because all scroll content needed to be bitstreamed then.
PS: video content at that time wasn't even a thing yet. That came a bit later, with RealVideo first.
But there was a time when X terminals were more than sufficient, probably for a decade or so.
bmacho 5 hours ago [-]
> Because all scroll content needed to be bitstreamed then.
Is it better now? Can a browser locally scroll an image, without restreaming it?
londons_explore 3 hours ago [-]
A modern browser (e.g. Chromium) uses the GPU for all drawing.
Here is an awesome (slightly outdated) talk about the architecture: https://groups.google.com/a/chromium.org/g/blink-dev/c/AK_rw...
The basic idea is that HTML content is drawn in transparent 'tiles' which are layered on top of one another. When the user scrolls, the tiles don't need to be redrawn, just re-composited at their new positions. GPUs are super fast at that, and even a 15-year-old GPU can easily do this for tens of layers at 60 FPS.
On Linux with a remote X server, I think the tiles would all end up on the X server, with only the pretty small 'draw tile number 22 at this location' going across the network. So the answer to your question is 'yes'.
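A hedged sketch of that idea in classic X terms (Chromium itself composites GPU textures, so server-side Pixmaps are only the nearest X11 analogue, not its real code path): tiles are painted once into pixmaps held by the X server, and a 'scroll' is just re-copying them at new offsets -- each XCopyArea is a request of a few dozen bytes no matter what the tile contains.

    /* Tile-cache scrolling: pixel data stays on the X server.
       Build with: cc tiledemo.c -lX11 */
    #include <X11/Xlib.h>

    #define TILE 256
    #define NTILES 8
    #define WIN_H 512

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, TILE, WIN_H,
                                         1, BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        GC gc = XCreateGC(dpy, win, 0, NULL);
        XSelectInput(dpy, win, ExposureMask | ButtonPressMask);
        XMapWindow(dpy, win);

        /* Paint each tile once; from here on its pixels live on the server. */
        Pixmap tiles[NTILES];
        for (int i = 0; i < NTILES; i++) {
            tiles[i] = XCreatePixmap(dpy, win, TILE, TILE, DefaultDepth(dpy, scr));
            XSetForeground(dpy, gc, WhitePixel(dpy, scr));
            XFillRectangle(dpy, tiles[i], gc, 0, 0, TILE, TILE);
            XSetForeground(dpy, gc, BlackPixel(dpy, scr));
            XDrawArc(dpy, tiles[i], gc, 20, 20, 100 + 10 * i, 100, 0, 360 * 64);
        }

        int scroll = 0;
        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == ButtonPress)                  /* click to "scroll" */
                scroll = (scroll + 32) % (NTILES * TILE);
            /* Re-composite: one small XCopyArea request per visible tile. */
            for (int i = 0; i < NTILES; i++) {
                int y = i * TILE - scroll;
                if (y > -TILE && y < WIN_H)
                    XCopyArea(dpy, tiles[i], win, gc, 0, 0, TILE, TILE, 0, y);
            }
            XFlush(dpy);
        }
    }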
kristianp 5 hours ago [-]
I remember using X terminals to do assignments in Modula-2 in about 1993. They had 1-bit screens; I think they were square, 1024x1024. Very high resolution for the time.
dfox 5 hours ago [-]
As for NCD X terminals (at least the later ones), a surprising amount of stuff could run directly on the terminal (which ran some weird MMU-less BSD variant): mwm and the Motif session manager, a dtterm-like terminal with telnet and serial-port support, some kind of JVM, and two different variants of Mosaic were part of the SW package (it booted either from a flash PC card or from NFS).
pjmlp 4 hours ago [-]
At my university we had a couple of X terminals from IBM, connecting into DG/UX, and I can certainly vouch that for the early 1990s they weren't that cheap to acquire.
If memory serves me right, we had four of them on the student lab.
Everyone else could enjoy connecting to DG/UX via a terminal app on Windows for Workgroups, or the older green- and amber-phosphor text terminals.
As an anecdote, those big-screen X terminals were quite often used to run four parallel sessions mixing talk and some MUD game.
bluGill 2 hours ago [-]
They were cheap compared to the cost of the workstation they connected to. But nobody would call them cheap even if you look at the price today without adjusting for inflation.
pantulis 4 hours ago [-]
We had like 4 Tektronix X terminals that could connect to Sun workstations for those fortunate enough to have accounts; the rest were using VT terminals to a VAX. And yes, the talk and MUD use cases were popular ;)
beej71 9 hours ago [-]
The good old days. We had a bunch of X terminals hooked up over thinnet to some HP 735 servers in college.
HenryBemis 8 hours ago [-]
In those good old days my uni was giving away those bulky Unix "manuals" (after every major upgrade they would refresh the documentation/dossiers) and leave a few dozen of the 'outdated' ones on a table. Everyone would grab one, first-come-first-served; you could end up with a 'useless' dossier, but they were still amazing reads.
arethuza 50 minutes ago [-]
One of the nice things when getting a new Sun workstation back in the day (say 1990 or so) was getting vast amounts of excellent printed documentation and folders into which they had to be clipped. Sun even used to supply proper books (e.g. the PostScript books) with OpenWindows to cover NeWS...
bluGill 2 hours ago [-]
I miss the days of useful manuals. They were hard and expensive to write, but they had a wealth of technical information that is often impossible to find today.
aa-jv 6 hours ago [-]
For most of the latter part of the '80s, I used Quarterdeck DESQview as my 'terminal', which allowed me to have 4 independent concurrent MS-DOS sessions running on my 386, each with its own video and network connectivity, so that I could telnet into my MIPS Magnum pizzabox and do some work.
At the beginning of the '90s, I was on the hunt for an alternative to the MS-DOS part when, eventually, I tried Minix instead .. and that led to replacing it with Linux as soon as it was available on funet. Multiple runs to Fry's for more RAM and some CPU upgrades later, I was soon compiling an X/Windows setup on my brand new 486 with 16 megabytes of RAM .. and about a week after that, I replaced my Quarterdeck setup with a functioning Linux workstation, thorns and warts and all. That was a nice kick in the pants for the operators who were threatening to take away my pizzabox, but it was short-lived joy, as not long thereafter I was able to afford an Indy, which served great for the purpose all through the '90s - and my Linux systems were relegated off the desktop to function as 'servers', once more.
But I always wondered about Quarterdeck's DESQview/X variant, and whether that would have been an alternative solution to the multi-term problem. It seems to me that this was available in 1987/88, which is odd given the article's claims that X workstations weren't really widespread around that period.
rjsw 5 hours ago [-]
I ran my own port of X11 on top of Interactive Systems 386/ix running on a 386 in 1987/88.
aa-jv 3 hours ago [-]
Nice, I remember poking at that a few times but never being able to justify a license purchase to my boss, who believed that I had everything I needed in the form of the Magnum pizzabox, and why would anyone need a UI for programming, lol ..
lproven 38 minutes ago [-]
> But I always wondered about Quarterdeck's DESQview/X variant
Dv/X was remarkable tech, and if it had shipped earlier could have changed the course of the industry. Sadly, it came too late.
> It seems to me that this was available in 1987/88,
No. That is roughly when I entered the computer industry. Dv/X was rumoured then, but the state of the art was OS/2 1.1, released late 1988 and the first version of OS/2 with a GUI.
Dv/X was not released until about 5 years later:
https://winworldpc.com/product/desqview/desqview-x-1x
1992. That's the same year as Windows 3.1, but critically, Windows 3.0 was in 1990, 2 years earlier.
Windows 3.0 was a result of the flop of OS/2 1.x.
OS/2 1.x was a new 16-bit multitasking networking kernel -- but that meant new drivers.
MS discarded the radical new OS, it discarded networking completely (until later), and moved the multitasking into the GUI layer, allowing Win3 to run on top of the single-tasking MS-DOS kernel. That meant excellent compatibility: it ran on almost anything, and it could run almost all DOS apps, and multitask them. And thanks to a brilliant skunkworks project, mostly by one man, David Weise, assisted by Murray Sargent, it combined 3 separate products (Windows 2, Windows/286 and Windows/386) into a single product that ran on all 3 types of PC and took good advantage of all of them. I wrote about its development here:
https://www.theregister.com/2025/01/18/how_windows_got_to_v3...
It also brought in some of the GUI design from OS/2 1.1 -- mainly from 1.2 and 1.3 -- the Program Manager and File Manager UI, the proportional fonts, the fake-3D controls, some of the Control Panel, and so on. It kept the best user-facing parts and threw away the fancy invisible stuff underneath, which was problematic.
Result: smash hit, redefined the PC market, and when Dv/X arrived it was doomed: too late, same as OS/2 2.0, which came out the same year as Dv/X.
If Dv/X had come out in the late 1980s, before Windows 3, it could have changed the way the PC industry went.
Dv/X combined the good bits of DOS, 386 memory management and multitasking, Unix networking and Unix GUIs into an interesting value proposition: network your DOS PCs with Unix boxes over Unix standards, get remote access to powerful Unix apps, and if vendors wanted, it enabled ports of Unix apps to this new multitasking networked DOS.
In the '80s that could have been a contender. Soon afterwards it was followed by Linux and the BSDs, which made that Unix stuff free and ran on the same kit. That would have been a great combination -- Dv/X PCs talking to BSD or Linux servers, when those Unix boxes didn't really have useful GUIs yet.
Windows 3 offered a different deal: it combined the good bits of DOS, OS/2 1.x's GUI, and Windows 2.x into a whole that ran on anything and could run old DOS apps and new GUI apps, side by side.
Networking didn't follow until Windows for Workgroups which followed Windows 3.1. Only businesses wanted that, so MS postponed it. Good move.
bitwize 1 hour ago [-]
One of HP's first X terminals ran on a 186. The same beleaguered 16-bit CPU that made the Tandy 2000 go also powered X terminals in the early 90s (albeit at twice the speed).
For all its "bloat", X could support a very sophisticated GUI -- over the network -- on very limited hardware by the standards of 30 years ago, let alone today.
TMWNN 8 hours ago [-]
I presume that X terminals did not appear at the same time as X Window because Project Athena <https://en.wikipedia.org/wiki/Project_Athena>, which created X, had its users use "real" workstations from the start, the IBM RT PC being the first. I don't know if MIT ever deployed any X terminals but, as I understand it, one of the tenets of Athena is that every workstation is a full-fledged remote login-capable node of the Athena cluster.
bediger4000 54 minutes ago [-]
Project Athena is not given enough credit.
anthk 6 hours ago [-]
The Linux Gazette had several articles on that, one of them from Andorra.
Great times.
https://linuxgazette.net/issue45/ward/ward.html