Tag Archives: display

Display thoughts

For this photo, I tried to mirror the image quality of the display as closely as possible.

I’ve been thinking about pulling the trigger on a new display. Not because there’s anything wrong with my current one, but after the kerfuffle Dota 2 players kicked up at the Shanghai Major over not having 120Hz monitors to compete on, I figured I wanted to see what all the fuss is about.

(There’s also the vain hope that it will somehow improve my game by a few percentage points, but that’s a story for another time.)

A little back story: since December 2014 I’ve been running with a Dell P2715Q, a 27-inch, 60Hz, 3840×2160 IPS display that was a substantial upgrade from the U2711 display I had previously. It’s pretty nice, with a few caveats: since my primary usage is with the display attached to my MacBook Pro, running it at native res means things get pretty unreadable unless I’m pumping up the size of everything. It’s fantastic when using a scaled resolution (I use a tool called EasyRes to switch between resolutions quickly), as it gives the quality of a “Retina” 2560×1440 display (the Mac renders everything at double that and downsamples to the panel’s 3840×2160), making everything as crisp as the freshest iceberg lettuce.
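To put rough numbers on that scaled mode, here’s a sketch assuming macOS’s usual HiDPI behaviour of rendering at double the “looks like” resolution and then downsampling to the panel (the Resolution type and function name are mine, purely for illustration):

```swift
// A rough sketch of a macOS HiDPI "scaled" mode (assumption: the OS
// renders at 2x the "looks like" size, then downsamples to the panel).
struct Resolution {
    let width: Int
    let height: Int
}

// UI is laid out at the "looks like" size; the backing store is
// double that in each dimension.
func backingStore(for looksLike: Resolution) -> Resolution {
    Resolution(width: looksLike.width * 2, height: looksLike.height * 2)
}

let looksLike = Resolution(width: 2560, height: 1440)
let rendered = backingStore(for: looksLike)
print("Rendered at \(rendered.width)x\(rendered.height)")
// Rendered at 5120x2880, which is then downsampled to the panel's
// native 3840x2160 -- plenty of extra work for the GPU.
```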

But I don’t usually run it that way, because things tend to slow down a bit, and the fans are audible all the time. I bought the best graphics card that Apple offered at the time, so maybe the Oculus CEO has a point when he says he’ll offer VR on the Mac when Apple decide to put a powerful enough GPU in their machines. (Stringent heat and power requirements mean that probably won’t happen in the MacBook Pro lineup anytime soon, as much as it pains me to say that.)

So I run my wonderful series of pixels at a non-Retina 2560×1440 when plugged into my Mac, even though text looks worse that way, and I have no more screen real estate than I did with my previous screen.

My PC is a different story entirely. I like to think I have a pretty great graphics card in the GTX 980, which lets me run whatever resolution I like at a near-constant 60 FPS. And because I hardly play anything other than Dota, which runs on the Source 2 engine, I can run it at the native res of my monitor without any noticeable frame-rate drops. Newer games like Dragon Age: Inquisition, Fallout 4, or The Division are more of a toss-up: I can either max all the settings at a lower resolution, or turn down the fanciness for more resolution, and what’s “better” mostly depends on the game.

A Little Bias

For the past few months now, I’ve been experimenting with something called bias lighting for my computer displays. All the cool kids are doing it, so I thought I would do the same.

Now, it’s gotta be said that I spend what probably amounts to an unhealthy amount of time in front of LCD displays. If I’m not looking at the two LCDs on my desk, I’m staring at my iPhone on the bus, in the street, in the car, wherever.

The vast majority of my time, though, is spent in front of my displays at home: a decently-sized Dell 27-incher, and the 15-inch LCD of my MacBook Pro. They’re not the best match-up size-wise, but going back to a single display when I’ve been using two for the majority of my computing life would be painful. There was a period where I went back to one after reading something about single displays being more productive. Needless to say, that experiment didn’t last very long — but I digress.

The theory behind bias lighting is that it increases the perceived contrast of the display and relieves eye strain. It has a few other effects as well, but those two are the main ones I’m really interested in, particularly as the lights in my room stay off for the most part (yes, my LCD tan is working out very well, thank you).

So I guess the question you’ve all been waiting for: how well does bias lighting work in practice?

The answer? I’m not exactly sure. Like I said, I’ve been using it for a couple of months now, and there’s definitely no discernible difference. Perhaps my piddly little 6-LED strips aren’t bright enough to have an impact on my gargantuan 27-inch display, perhaps I’m sitting too close to the monitor for them to make any kind of a difference, or perhaps I was expecting too much out of bias lighting in the first place.

Perhaps I’ll notice a difference when I turn them off for a month or so – but that’s for another time.

Dell SP2309W — 2048×1152 what now?

I spose the iphone4 would be a good subjective test of screen tech like this – Cramming relatively big res into tiny screens.

Er, no, no it wouldn’t.

Back story: there’s a pretty nice screen that Dell makes. It’s the SP2309W, and for $279 you get a 23″ TFT Dell monitor that does 2048×1152, higher than high definition (but still at a ratio of 16:9).
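As a quick sanity check on those numbers, here’s the back-of-the-envelope arithmetic (the 23-inch diagonal and resolution come from the spec; the code itself is just illustrative):

```swift
import Foundation

// Back-of-the-envelope numbers for a 23" 2048x1152 panel.
let width = 2048.0, height = 1152.0
let diagonalInches = 23.0

// The aspect ratio reduces via GCD: 2048:1152 -> 16:9, so it really
// is "still 16:9" despite the unusual resolution.
func gcd(_ a: Int, _ b: Int) -> Int { b == 0 ? a : gcd(b, a % b) }
let divisor = gcd(2048, 1152)
print("Aspect ratio: \(2048 / divisor):\(1152 / divisor)")  // 16:9

// Pixel density: pixels along the diagonal over physical inches.
let diagonalPixels = (width * width + height * height).squareRoot()
print("Density: \(diagonalPixels / diagonalInches) ppi")    // ~102 ppi
```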

I pointed out this monitor to a couple of my friends, and one made the comment you see above (along with something about a weird resolution for a computer monitor).

Before I continue I’d like to point out that most of this is a re-hash (albeit a pretty poor one) of Dustin Curtis’ thoughts on the issue — I’d suggest you go read his blog first, and then come back here when you’re done.

And that’s exactly where my friend is wrong. It’s not like the iPhone 4, because while the iPhone 4 crams a relatively big res into a smallish screen, it does so in a way that doesn’t affect the size of on-screen elements.

Traditionally, what happens is that as pixel density gets higher, user interface elements get smaller. That’s because UI elements and text have traditionally been drawn at fixed pixel dimensions, so the more pixels you squeeze into each inch, the smaller those elements physically become.

Over at his blog, Dustin explains:

This means that if you draw the letter “a” in 12pt Helvetica on any screen, it will take up exactly 8×9 pixels (almost all the time). As you increase the number of pixels on the whole display, the number of pixels that it takes to draw the letter “a” in 12pt Helvetica stays the same, the letter just becomes smaller.

More pixels crammed into a smaller space (that is, a higher pixel density) results in things becoming smaller. If you think about it, it makes sense — say you’ve got an image that’s 512×512, the size of a typical Mac OS X application icon. If your screen displays that at, say, 100ppi, it’ll appear to have certain dimensions on the screen if you choose to measure it with a ruler. Measure that same icon on a 130ppi screen, and it’ll appear smaller. Not because it’s lost any pixels, but because those same pixels have been jammed into a smaller space.
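Here’s that ruler thought experiment as a few lines of code (the function and the ppi figures are just the ones from the example above):

```swift
// The ruler thought experiment in code: the same 512px-wide icon
// measured on screens of different pixel density.
func physicalInches(pixels: Double, ppi: Double) -> Double {
    pixels / ppi
}

let iconPixels = 512.0
print(physicalInches(pixels: iconPixels, ppi: 100))  // 5.12 inches
print(physicalInches(pixels: iconPixels, ppi: 130))  // ~3.94 inches
// Same pixels, smaller measurement: density goes up, physical size
// goes down.
```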

Then you hit the iPhone 4. It’s not quite resolution independence*, but what Apple have done works pretty well. Instead of using the same graphics resources as the iPhone 2G/3G/3GS, developers are encouraged to develop “retina-optimised” graphics — that is, graphics at double the resolution of their previous-generation iPhone counterparts. Why? Because such graphics will increase interface definition.

If you take that same icon we had in the above example, and instead of just scaling it up or down to suit different resolutions, what you can actually do is create a whole new version of that icon so that it displays at the same physical size — regardless of which screen you display it on. Obviously the icon will look vastly improved on a higher resolution display compared to the lower resolution one, but that’s only because we’re increasing image density alongside pixel density.
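In practice, developers opt in by shipping a second set of image files. The “@2x” file-naming convention is Apple’s real one; the selection logic below is my simplification, not UIKit’s actual lookup:

```swift
// Simplified sketch of scale-based asset selection (an approximation,
// not UIKit's actual implementation). The naming convention is real:
// "icon.png" for standard screens, "icon@2x.png" for Retina screens.
func assetName(base: String, screenScale: Double) -> String {
    screenScale >= 2.0 ? "\(base)@2x.png" : "\(base).png"
}

print(assetName(base: "icon", screenScale: 1.0))  // icon.png (3GS)
print(assetName(base: "icon", screenScale: 2.0))  // icon@2x.png (iPhone 4)
// Both draw at the same point size on screen; the @2x version just has
// four times the pixels to fill that space with.
```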

Dustin, again, sums it up best:

This means that when iOS scales the elements in physical size to fit the 3.5-inch iPhone 4 screen, they take up the same amount of space as the elements drawn on the iPhone 3GS but they use four times the number of pixels.

Four times the number of pixels, represented in the same physical space = incredible user interface definition.

If that’s not mind-blowingly awesome, I’m not sure what is.

The whole “retina display” mentality of the iPhone is not about representing more things in the same space — it’s about showing the same stuff, just at a better quality. Contrast this to the display above: because whatever you use on that display (Windows or Mac) isn’t resolution independent (Mac OS X is, to a degree), things will appear smaller, and that’s just how the cookie crumbles.

* okay, it’s not resolution independence at all. Without getting too technical, Apple are actually using two sets of graphics resources for everything — apparently they found that ahead-of-time resolution independence offered the greatest performance/resource benefit. More reading available here on the matter (thanks, Bjango!).