I am always looking for the most powerful tools in the smallest packages, so the Razer Blade 15″ laptop with a GeForce RTX 2080 Max-Q was worth checking out. The Max-Q variants are optimized for better thermals and power usage, at the potential expense of performance, in order to allow more powerful GPUs to be used in smaller laptops. The RTX 2080 is NVidia's top-end mobile GPU, with 2944 CUDA cores, 8GB of GDDR6 memory running at 384GB/s, and 13.6 billion transistors on the chip.
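That bandwidth figure checks out with simple arithmetic, assuming the Max-Q part runs its GDDR6 at an effective 12Gbps per pin on the RTX 2080's 256-bit memory bus:

```python
# Back-of-the-envelope check of the quoted 384GB/s figure, assuming the
# Max-Q part runs its GDDR6 at an effective 12Gbps per pin on the
# RTX 2080's 256-bit memory bus.
bus_width_bits = 256
effective_rate_gbps = 12  # desktop RTX 2080s run 14Gbps, for 448GB/s
bandwidth_gb_per_s = bus_width_bits * effective_rate_gbps / 8
print(f"{bandwidth_gb_per_s:.0f} GB/s")  # -> 384 GB/s
```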
The new Razer Blade has a 6-core Intel Core i7-8750H with 16GB of RAM and a 512GB SSD. It has mDP 1.4, HDMI 2.0b, Thunderbolt 3, and 3x USB 3.1 ports. Its 15.6″ screen can run at a 144Hz refresh rate, but only supports Full HD 1920×1080, which is optimized for gaming, not content creation. The last four laptops I have used have all been UHD resolution, at various sizes, which gives far more screen real estate for creative applications, and better resolution for reviewing your imagery.
I also prefer to have an Ethernet port, but I am beginning to accept that a dongle might be acceptable for that, especially since it opens up the possibility of utilizing 10GbE. We aren't going to see 10 Gigabit Ethernet built into laptops anytime soon, due to its excessive power consumption, but you only need 10GbE at certain locations that support it, so a dongle or docking station is reasonable for those use cases.
Certain functionality on the system requires registering a free account with Razer, which I find annoying, but that is becoming the norm these days. The account gives access to the Razer Synapse utility, for customizing system settings, setting fan speed, and even remapping keyboard functionality. Any other Razer peripherals would be controlled here as well. As a top-end modern gaming system, the keyboard has fully controllable color backlighting. While I find most of the default “effects” distracting, the option to color code your shortcut keys is interesting, and if you really want to go to the next level, you can customize it further. For example, when you press the Fn key, by default the keys that have function behaviors attached to them light up white, which impressed me. The colors and dimming are generated by blinking the LEDs, and I was able to perceive that flicker when moving my eyes, so I stuck with colors that don’t involve dimmed channels. That still gave me six options (red, green, and blue, plus cyan, yellow, and magenta) as well as white.
This is the color config I was running in the photos, but the camera does not reflect how it actually looks. In the pictures the keys look washed out, but in person they are almost too bright and vibrant. But we are here for more than looks, so it was time to put the system through its paces and see what it can do under the hood. I ran a number of benchmarks, starting with Premiere Pro, where I now have a consistent set of tests to compare against previous systems. The tests involve Red, Venice, and Alexa source files, with various GPU effects applied, exported to compressed formats. It handled the 4K and 8K renders quite well, pretty comparable to full desktop systems, showcasing the power of the RTX GPU. Under the sustained load of a 30-minute render it did get quite warm, which is to be expected, so you will want adequate ventilation, and you won’t want it actually sitting on top of your lap.
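For anyone wanting to run similar comparisons on their own footage, a minimal sketch of a repeatable timing harness looks something like this (the ffmpeg command is just a placeholder; substitute whatever actually kicks off your export, such as an Adobe Media Encoder watch-folder script):

```python
import statistics
import subprocess
import time

# Placeholder render command: swap in whatever launches your export.
RENDER_CMD = ["ffmpeg", "-y", "-i", "source.mov", "-c:v", "libx264", "out.mp4"]

def time_renders(runs: int = 3) -> None:
    times = []
    for i in range(runs):
        start = time.perf_counter()
        subprocess.run(RENDER_CMD, check=True)  # block until the render finishes
        elapsed = time.perf_counter() - start
        times.append(elapsed)
        print(f"run {i + 1}: {elapsed:.1f}s")
    print(f"mean of {runs} runs: {statistics.mean(times):.1f}s")

if __name__ == "__main__":
    time_renders()
```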
My next test was RedCine-X Pro, with its new CUDA playback acceleration for files up to 8K. But what is the point of decoding 8K if you can’t see all the pixels you are processing? So for this test, I also connected my Dell UP3218K screen to the Razer Blade’s Mini DisplayPort 1.4 output. Outputting to the monitor does affect performance a bit, but that is a reasonable trade-off, because it doesn’t matter if you can decode 8K in real time if you can’t display it. NVidia provides reviewers with links to some test footage, but I have 40TB of my own to choose from, including test clips from all the different settings on the various cameras from my large-format camera test last year. The 4K Red files worked great at full res on the external monitor, full screen or pixel for pixel, while the system barely kept up with the 6K and 8K anamorphic files. 8K full frame required half-res playback to view smoothly on the 8K display, and was barely real time even with the external monitor disabled, but that is still very impressive for a laptop, as I have yet to accomplish that on my desktop. The rest of the files played back solidly on the local display. With CUDA acceleration disabled, I had to drop below 1/8th res to play anything back on the laptop, so this is where having a powerful GPU makes a big difference.
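The gap between those formats comes down to raw pixel throughput, which a quick back-of-the-envelope calculation makes clear (assuming 24fps playback and typical frame dimensions, which vary by camera and recording mode):

```python
# Rough pixel-throughput comparison, assuming 24fps playback and typical
# frame dimensions (exact sizes vary by camera and recording mode).
FPS = 24
resolutions = {
    "4K (4096x2160)": 4096 * 2160,
    "6K (6144x3240)": 6144 * 3240,
    "8K FF (8192x4320)": 8192 * 4320,
}
for name, pixels in resolutions.items():
    print(f"{name}: {pixels * FPS / 1e6:,.0f} megapixels/sec")
# 8K full frame is ~4x the pixel rate of 4K, which is why it was the one
# format that needed half-res playback on the external display.
```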
DaVinci Resolve is the other major video editing program to consider, and while I don’t find it intuitive to use myself, I usually recommend it to others who are looking for a high level of functionality but aren’t ready to pay for Premiere Pro. I downloaded and rendered a test project from NVidia, which plays Blackmagic RAW files in real time with a variety of effects, and renders to H.264 in 40 seconds, but takes 10 times longer with CUDA disabled in Resolve. Here, as with the other tests, the real-world significance isn’t how much faster it is with a GPU than without, but how much faster it is with this RTX GPU compared to the other options. NVidia claims this render takes two and a half times as long on a Radeon-based MacBook Pro, and 10% longer on a previous-generation GTX 1080 laptop, which is consistent with my previous experience and tests.
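Translating those relative claims into implied render times, using my measured 40-second result as the baseline, puts the options in perspective:

```python
# Implied render times for the same Resolve project, using the measured
# 40-second CUDA-enabled result as the baseline for the relative claims.
baseline_s = 40
implied = {
    "RTX 2080 Max-Q (CUDA on)": baseline_s,
    "GTX 1080 laptop (10% longer)": baseline_s * 1.10,
    "Radeon MacBook Pro (2.5x as long)": baseline_s * 2.5,
    "CUDA disabled (10x as long)": baseline_s * 10,
}
for system, seconds in implied.items():
    print(f"{system}: {seconds:.0f}s")
```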
The primary differentiation of NVidia’s RTX line of GPUs is the inclusion of RT cores to accelerate ray tracing, and Tensor cores to accelerate AI inferencing, so I wanted to try tasks that utilize those accelerations. I started by testing Adobe’s AI-based image enhancement in Lightroom Classic CC. NVidia claims that the AI image enhancement utilizes the RTX’s Tensor cores, and that it is four times faster with the RTX card active. The visual results of the process didn’t appear to be much better than I could have achieved with manual development in Photoshop, but it was a lot faster to let the computer figure out how to improve the images. I also ran into an issue where certain blocks of the image got corrupted in the process (pictured below), but I am not sure whether Adobe or NVidia is at fault here.
While I could have used this review as an excuse to go play Battlefield V and experience ray tracing in video games, I stuck with the content creation focus. When I went looking for a way to test ray tracing, NVidia pointed me to OctaneRender. OTOY has created a utility called OctaneBench for measuring the performance of various hardware configurations with their render engine, and it reported that the RTX’s ray tracing acceleration was giving me a 3x increase in render performance.
I also tested ProRender in Cinema 4D, which doesn’t use the RTX’s dedicated RT cores, but does utilize GPU acceleration through OpenCL. Apparently there is a way to utilize the Arnold ray tracing engine in Cinema 4D, but I was reaching the limits of my 3D animation expertise and resources, so I didn’t pursue that path, and didn’t test Maya for the same reason. With ProRender, I was able to render views of various demo scenes 10-20 times faster than I could with the CPU only. I will probably include this as a regular test in future reviews, allowing me to gauge render performance far better than I can with Cinebench (which returned a CPU score of 836), and compiling a list of comparison render times will add more context to the raw data. For now, I was able to render the demo “Bamboo” scene in 39 seconds, and the more complex “Coffee Bean” scene in 188 seconds, beating even NVidia’s marketing team’s expected results.
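For context, a quick calculation of what that 10-20x speedup range implies for CPU-only render times on those same scenes:

```python
# What the observed 10-20x ProRender speedup implies for CPU-only render
# times on the same demo scenes.
gpu_times_s = {"Bamboo": 39, "Coffee Bean": 188}
for scene, gpu_s in gpu_times_s.items():
    low, high = gpu_s * 10, gpu_s * 20
    print(f"{scene}: {gpu_s}s on GPU -> roughly {low / 60:.0f}-{high / 60:.0f} min CPU-only")
```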
No test of a top-end GPU would be complete without trying out its VR performance. I connected my Lenovo Explorer WMR headset, installed SteamVR, and tested both 360° video editing in Premiere Pro and the true 3D experiences available in Steam. As expected, the experience was smooth, making this one of the most portable solutions for full-performance VR.
The RTX 2080 is a great GPU, and I had no issues with it. Outside of true 3D work, the upgrade from the Pascal-based GTX 1080 is minor, but anyone upgrading from an older system than that, or doing true ray tracing or AI processing, will see a noticeable improvement in performance. The new Razer Blade is a powerful laptop for its size, but I did have a few issues with it. Some of those, like the screen resolution, are due to its focus on gaming instead of content creation. But I also was not in love with the touchpad, which is a frequent issue when switching between devices constantly. In this case, right-clicks registering instead of left-clicks, and movement not registering while the mouse button was pressed, were major headaches, alleviated only by connecting a mouse and sticking with that, which I frequently do anyway. The power supply has a rather large connector on a cumbersome, thick, stiff cord, but it isn’t going to fall out once you get it inserted. Battery life will vary greatly depending on how much processing power you are using.
These RTX chips are the first mobile GPUs with dedicated RT cores, and with Tensor cores, as the Volta-based chips never came to laptops. So for anyone with processing needs that are accelerated by those developments, the new RTX chip is obviously worth the upgrade. If you want the fastest thing out there, this is it. (Or at least it was, until Razer added options for 9th Gen Intel processors this week, along with a 4K OLED screen, an upgrade I would highly recommend for content creators.) The model I reviewed goes for $3,000, and the new 9th Gen version with a 240Hz screen is the same price, while the 4K OLED touch version costs an extra $300.
If you are looking for a more balanced solution, or are on a more limited budget, you should definitely compare this system to the newly announced GTX 16-series mobile products, to see which option is a better fit for your particular needs and budget. The development of eGPUs has also shifted the ideal target for my own usage. This system has a Thunderbolt 3 port, but its internal GPU is fast enough that you won’t see significant gains from adding an eGPU, and that built-in performance comes at the expense of battery life and price. I am drawn to eGPUs because I only need maximum performance at my desk, but if you need top-end graphics performance totally untethered, the RTX Max-Q chips are the solution for you.
2019 Razer Pro 17 UHD: I upgraded this from 16GB to 64GB of RAM, and it has a 6-core i7 and a GTX 2080, but it still seems like my old HP Envy as far as uploading videos, etc. Am I doing something wrong here? I paid $5K altogether for this, with 64GB of RAM, the UHD screen, and a 3TB SSD, and it doesn’t seem any faster than my old quad-core.
It is not going to make uploading videos any faster, as that is likely limited by your internet connection, not your system performance. Rendering your videos, on the other hand, should be measurably faster.
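To put numbers on that: transfer time is just file size divided by connection speed, so the GPU never enters into it. A quick illustrative example (these numbers are hypothetical, not measured):

```python
# Upload time depends only on file size and connection speed, not the GPU.
# Example numbers are hypothetical, not measured.
file_size_gb = 5      # a typical finished video export
uplink_mbps = 25      # a common residential upload speed
upload_s = file_size_gb * 8 * 1000 / uplink_mbps  # GB -> megabits -> seconds
print(f"{upload_s / 60:.0f} minutes to upload")  # ~27 minutes, on any GPU
```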