There are some new benchmarks that imply Intel’s new CPU (Sandy Bridge) may actually match dedicated GPUs at video transcoding.
Intel today demonstrated, for the first time, new CPU instructions optimized for this task. Encoding performance is a tricky business and not all details are available yet, but what we have so far makes Intel look very good.
The numbers compare standard Core i7 quad-core encoding with NVidia GPU and Sandy Bridge CPU encoding performance. The encoders used were Handbrake (x264) for the Core i7 and Badaboom for NVidia's GTX 260 GPU. The Sandy Bridge encoder is not known, but Intel maintains its own encoder, so it may be a special version optimized for Sandy Bridge.
*Units are seconds of source video, not frames
The source and target video profiles were respectively 1080p and iPhone 480x320.
Grains of Salt
Now for the caveats and uncertainties that must be understood to put these numbers in context. Consider the above numbers rough comparisons for reasons including the following:
The encoder settings and output quality are not known to be equivalent. This can make a massive difference in results.
I did not run the benchmarks myself. The Core i7/Badaboom numbers come from overclockersclub, and the Intel benchmarks were timed by Anand Lal Shimpi on his watch from the audience.
We don’t know the exact clock speed and turbo mode for the Sandy Bridge demo. It could be a mid-range model or a high-end extreme equivalent.
X-Factor: The x264 encoder runs on CPUs only, and it is widely considered the best encoder available. Its team includes some insane geniuses who seem to be able to optimize almost anything. So if Intel's encoder is fast, an x264 build tuned for the same hardware will likely be even faster.
Caveats notwithstanding, we are getting a glimpse of the CPU/GPU video encoding battle shaping up here. For a company like Badaboom, the final results could determine its entire viability as a going concern. For NVidia, GPU encoding has been the poster child for one of the most mainstream uses of general-purpose GPU computing. For Intel, it's about staying relevant for more workloads in a new era of massively parallel computing.
The Verdict
So what do you think, will Badaboom or NVidia be hurt by Sandy Bridge? Or is it too little too late for Intel in the encoding battle?
The final answers will start to unfold when official Sandy Bridge review samples arrive before the end of 2010.
What is the worst part of Chrome's user interface design? There is one candidate I'd like to nominate.
The user interface to change the default font in Google's Chrome browser makes me cringe. To see if you agree, here are the basic steps a user follows to change the font:
1) Wrench = Settings?
To change something in Chrome, you first have to figure out that clicking the wrench icon is how you reach the browser settings. For a computer-savvy person, OK, I'll buy this, but for many people I know (my parents included) the process has already failed. I totally get the reason behind this: minimalist design is aesthetically pleasing, and the lack of clutter can, on average, provide a better user experience. However, even when this philosophy works it still has trade-offs, and this case is one of them. Overall I don't begrudge Google the choice of a wrench in the spirit of simplicity. It's a trade-off (like all designs require), but even good trade-offs have their negative side.
2) Under the Hood is Scary
Once you get to the Chrome options the next step in changing the default font is to select a tab called "Under the Hood".
Seriously?
It seems a bit ironic that these correlations exist:
Need big font > Have poor vision > Vision degrades with age > The older you are the less likely you want to click on features called Under the Hood
Maybe to change the default font they could make you click "Stick a fork in my eye…" Usability would probably be about the same, but it might add a little variety to the experience.
3) Web Content is Meaningless
Users making it this far will find the font settings categorized under "Web Content". At first this might seem reasonable. On second thought, almost anything in Chrome could be shoehorned into this category - it's a web browser, for the love of...! Its entire purpose is to help you with web content. My only idea for how it could be more over-generalized would be a category called "Internet Stuff".
4) Eliminate Scrolling by Using Tabs which Require Scrolling
The final step to finding the default font settings is to realize that you must scroll down within the Under the Hood tab before you can see anything related to fonts. The first problem with this is some designers just consider it bad design to scroll too much within tabs because one of the main reasons for the tab metaphor was to reduce the need for scrolling (run-on sentence intentional).
Recapitulation
Changing the font in Chrome fails at least two common usability benchmarks. First, it will often fail the parent test (unless your parents are fairly software savvy). Second, the discoverability is really poor.
Discoverability isn't everything as Scott points out, however it shouldn't suffer any more than is necessary for a balanced experience. Could some of the steps above be made more discoverable without hurting the overall balance of feature priorities? It doesn't seem difficult to do.
Some may say, if you hate Chrome so much just quit using it!
However, I don't hate Chrome - in general I respect the work Google has done. To be fair, this is one small feature within a vast body of design and engineering. Regardless of my feelings, though, I'm compelled to use all browsers - developers commonly use a variety of browsers for testing, research, etc. That is not likely to change, just like my vision is not likely to improve.
I’ve found that creating water effects that respond interactively and smoothly is not an easy thing to do in WPF or Silverlight. However, because a project required it, I trudged forward and managed to get it working pretty nicely – buttery smooth with low CPU utilization.
The results can be seen in the video, and the reasons it wasn’t easy may be interesting to WPF/Silverlight developers, so I’ll go through them below. Can you guess what the technical obstacles are?
(user interaction 17 seconds in)
Graphics architecture
The first hurdle is that WPF and Silverlight primarily use a “display list” or “retained mode” graphics architecture. This is usually not a bad thing, and it allowed developers to break away from the WM_PAINT model used by WinForms and, before that, Win32. Those older systems encourage an “immediate mode” architecture, while WPF and Silverlight encourage retained mode but will tolerate immediate mode in certain cases.
WPF and Silverlight
You might have nightmares about WinForms and Win32 and wish all manner of death upon them. Death to immediate mode as well – long live WPF/Silverlight and retained mode graphics!
Just one small problem – immediate mode graphics are well suited to certain things like games and simulations, and no one wants to give those up. In fact, Direct3d itself is an immediate mode architecture, and it underlies most graphics in Windows 7, including WPF itself. I’m speaking in general here, and there are always exceptions. For example, I’ve written simple games and simulations in WPF, but many times it is not an ideal approach.
Animation vs. Simulation
To get more specific about why WPF/Silverlight are not great at simulating water, it’s basically about loops. Many simulations of water and other phenomena run a programming loop over and over again throughout the simulation. The reason is that future states of the simulation depend on the output of previous states. The problem is that animations are different from simulations.
In WPF you can easily animate a ball along a flat line. WPF determines the position of the ball using a function with time as the input:
Time elapsed 1 second - Move ball to 25% along the line
Time elapsed 2 seconds - Move ball to 50% along the line
Time elapsed 3 seconds - Move ball to 75% along the line
You could also choose any random time and the ball would be placed properly because the new position does not depend on the previous position. It’s always just a function taking any time as an input, and spitting out a position as output.
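To make that concrete, here is a minimal C# sketch of the idea. The 400-pixel line and four-second duration are just illustrative values of my own, not anything from WPF itself:

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media.Animation;

static class AnimationSketch
{
    // Position is a pure function of elapsed time: sample any time,
    // get the right position, no history required.
    public static double PositionAlongLine(double elapsedSeconds, double durationSeconds)
    {
        return Math.Min(elapsedSeconds / durationSeconds, 1.0); // 0.0 = start, 1.0 = end
    }

    // The same idea expressed declaratively: WPF evaluates the value
    // from its own clock every frame, so no previous frame's output
    // is needed. "ball" is assumed to be an element on a Canvas.
    public static void AnimateBall(UIElement ball)
    {
        var anim = new DoubleAnimation(0, 400, TimeSpan.FromSeconds(4));
        ball.BeginAnimation(Canvas.LeftProperty, anim);
    }
}
```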
Most animation systems work in a similar way, which is why, regardless of which application artists and animators choose, it usually has a big timeline along the bottom.
So if that's how animation works, how is simulation different?
In a simulation of water, our function to calculate the new position takes not only time as an input but also the previous position of the water. This feedback loop is the key difference.
Since WPF does not encourage loops that let you feed the output back into the next input, we are kind of stuck.
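Here is roughly what that feedback loop looks like in code. This is the classic two-buffer height-field scheme; the grid size, damping factor, and names are my own illustration, not from any WPF API:

```csharp
// Classic two-buffer height-field water: every new height is computed
// from the previous frames' heights, so each step feeds the next.
static class WaterSketch
{
    const int N = 128;                           // illustrative grid size
    static float[,] previous = new float[N, N];  // heights two frames ago
    static float[,] current  = new float[N, N];  // heights last frame

    public static void Step(float damping)
    {
        for (int x = 1; x < N - 1; x++)
        for (int y = 1; y < N - 1; y++)
        {
            // Average the four neighbors from the current frame and
            // subtract the height from two frames ago -- a feedback
            // relationship a pure function of time cannot express.
            float next = (current[x - 1, y] + current[x + 1, y] +
                          current[x, y - 1] + current[x, y + 1]) / 2f
                         - previous[x, y];
            previous[x, y] = next * damping;     // write into the old buffer
        }
        // This frame's output becomes the next frame's input.
        float[,] temp = previous;
        previous = current;
        current = temp;
    }
}
```

Run Step once per frame and ripples propagate; jump to an arbitrary time and the state is simply wrong, which is exactly the property animations don’t have.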
What about CompositionTarget?
One ray of hope is CompositionTarget. This is a WPF component specifically designed to let you control your own rendering loop. I consider using CompositionTarget to be going a bit off the reservation, because it’s not really a mainstream technique used by most apps; however, it’s fairly clean, and that’s OK if it can get us past the loop barrier. While this gives us precise control over the rendering loop, it still does not provide any way for the output of the shaders to feed back into the loop.
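For the curious, hooking into it is just one event. The Rendering event is the real WPF API; the window class here is just a stand-in for wherever you wire it up:

```csharp
using System;
using System.Windows;
using System.Windows.Media;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        // Fires once per frame, just before WPF composes the scene.
        CompositionTarget.Rendering += OnRendering;
    }

    private void OnRendering(object sender, EventArgs e)
    {
        // Per-frame work goes here: advance the simulation one step,
        // then push the new state toward the screen (for example via
        // a WriteableBitmap). This controls *when* we compute, but it
        // still provides no way to read shader output back without
        // falling off the hardware-accelerated path.
    }
}
```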
We could just not use shaders and write the code procedurally, but then the question becomes: how much performance do shaders afford our simulation?
How fast are shaders in WPF?
VERY fast. In fact, to get nice performance and realistic water, custom shaders are essentially required - it's an order(s)-of-magnitude difference. Using shaders should be no problem, because WPF supports shader model 3.0. In fact, in a previous blog post I showed a single-drop water effect using WPF shaders, and the code is simple XAML with no problems. But again, that was an animation, not a simulation. The drops could not affect each other, and you could not drag your finger through the water and create a wave as in the video above.
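As a reference point for anyone who hasn’t wired one up, a WPF pixel shader is wrapped in a ShaderEffect. The sketch below follows the standard pattern, though the effect name and the "Ripple.ps" resource path are hypothetical (the HLSL is compiled to a .ps file with fxc and embedded as a resource):

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Effects;

// Minimal ShaderEffect wrapper around a compiled HLSL pixel shader.
public class RippleEffect : ShaderEffect
{
    private static readonly PixelShader Shader = new PixelShader
    {
        // Hypothetical resource path to the compiled shader.
        UriSource = new Uri("pack://application:,,,/Shaders/Ripple.ps")
    };

    public RippleEffect()
    {
        PixelShader = Shader;
        UpdateShaderValue(InputProperty);
    }

    // The element's rendered content, bound to sampler register 0.
    public static readonly DependencyProperty InputProperty =
        ShaderEffect.RegisterPixelShaderSamplerProperty(
            "Input", typeof(RippleEffect), 0);

    public Brush Input
    {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }
}

// Usage: someElement.Effect = new RippleEffect();
```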
To get some real coolness we need a simulation, which means using the output from the shader as the input for the next time the shader is run. The problem is that when you capture the output of shaders as data in WPF (rather than just displaying the output), you lose hardware acceleration and performance goes to heck in a handbasket. One item on my wish list for WPF 5.0 would definitely be a form of shader trees where the output can be modified and looped back as input.
Keeping pixel shaders running in hardware in this fashion within a WPF app requires Direct3d, which in turn requires D3DImage. D3DImage is the class that allows separate Direct3d C++ code to output into an image brush within a WPF app. It also allows (requires, really) the Direct3d code to have its own rendering loop, which lets us feed shader output back into the input for the simulation.
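The managed side of that handoff is surprisingly small. Below is a rough sketch of the D3DImage calls involved; the API is real, but the native surface pointer stands in for a separate C++/Direct3d DLL that actually runs the shader feedback loop:

```csharp
using System;
using System.Windows;
using System.Windows.Interop;
using System.Windows.Media;

public class WaterSurfaceHost
{
    private readonly D3DImage _d3dImage = new D3DImage();
    private readonly IntPtr _backBuffer; // IDirect3DSurface9* from the C++ DLL (assumed)

    public WaterSurfaceHost(IntPtr nativeSurface)
    {
        _backBuffer = nativeSurface;

        // One-time setup: hand WPF the native Direct3d surface.
        _d3dImage.Lock();
        _d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, _backBuffer);
        _d3dImage.Unlock();

        CompositionTarget.Rendering += OnRendering;
    }

    // Expose the image so XAML can paint with it, e.g. via an ImageBrush.
    public D3DImage Source { get { return _d3dImage; } }

    private void OnRendering(object sender, EventArgs e)
    {
        if (!_d3dImage.IsFrontBufferAvailable) return;

        // The native loop has already run the shaders (feeding last
        // frame's output back in); we just tell WPF the pixels changed.
        _d3dImage.Lock();
        _d3dImage.AddDirtyRect(new Int32Rect(0, 0,
            _d3dImage.PixelWidth, _d3dImage.PixelHeight));
        _d3dImage.Unlock();
    }
}
```

An ImageBrush with this D3DImage as its ImageSource can then fill any WPF element with the live water.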
With Silverlight it’s worse, because of the lack of GPU acceleration and because Direct3d integration is not an option. I’ve seen a few liquid/particle simulations in Silverlight, but they are quite slow and tend to peg CPU utilization.
So the bottom line is that creating interactive effects like this in WPF really requires CompositionTarget, custom shaders, and a separate Direct3d helper DLL. The first two techniques alone will do the same thing at about a quarter of the speed.
Every year I hope to never code in C++ again, but it looks like that time is not here quite yet.