Building a Cryptocurrency Mining Rig – Part Four

This is Part Four in a series on building a cryptocurrency mining rig. In this part, I optimize the rig to:

  1. Overclock GPUs to increase the mining hashrate
  2. Decrease power consumption
  3. Increase overall hardware utilization

Before discussing optimizations, let’s review the baseline benchmarks.


These were the top-line benchmarks after adopting the revised mining strategy:

[0]: Measured at the wall using a Kill-a-watt.

[1]: As reported by nvidia-smi.

I drilled down into the power consumption using a Kill-a-watt, nvidia-smi output, and some basic arithmetic:

[2]: As reported by nvidia-smi.

[3]: Measured at the wall using a Kill-a-watt.

[4]: Chassis fans were being run at maximum speed.

[5]: Values were rounded to the first decimal place.
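The arithmetic behind that drill-down is simple enough to sketch. In the snippet below, the wall figure is a placeholder for illustration; the 130 W per-GPU draw matches what nvidia-smi reported before undervolting:

```shell
# Hypothetical wall reading; the per-GPU draw is the ~130 W nvidia-smi reported.
WALL_WATTS=900
GPU_WATTS=130
NUM_GPUS=6

GPU_TOTAL=$((GPU_WATTS * NUM_GPUS))   # power attributable to the GPUs
REST=$((WALL_WATTS - GPU_TOTAL))      # CPU, motherboard, fans, PSU overhead
echo "GPUs: ${GPU_TOTAL} W, everything else: ${REST} W"
```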

It’s also worth noting an externality that wasn’t captured above: the rig made my apartment appreciably warmer, which in turn caused me to run the air conditioner more frequently. I didn’t attempt to quantify the effect that had on my energy bill, but I imagine it was non-trivial.

GPU Overclocking

I began the optimization effort by attempting to overclock the GPUs. (EVGA GPUs are "overclocked out-of-the-box", but it’s usually possible to push them harder.) Internet research indicated that there are two overclocking methods available on Linux:

  1. via global settings specified by nvidia-settings
  2. via application-specific settings specified by nvidia-smi

I first attempted to overclock using nvidia-settings, but quickly ran into trouble. I discovered that nvidia-settings will not detect a GPU unless:

  1. It is connected to a monitor
  2. The X server is running

Requirement 2 was annoying, because I intended to run the rig mostly headless. Requirement 1 was particularly troublesome, however, because I had switched to integrated graphics for video output after encountering BIOS issues in Part Two. Because I hadn’t connected monitors to my GPUs, nvidia-settings simply wouldn’t detect them.

(It’s worth noting that reddit users have actually overcome this problem by attaching fake monitors to the GPUs via xorg.conf, but I had lost interest in nvidia-settings entirely at this point.)
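For reference, that workaround typically amounts to an xorg.conf Device section along these lines (the identifier is illustrative, and the Coolbits value shown is an assumption that unlocks the clock controls):

```
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    # Let the driver start without a physical display attached
    Option     "AllowEmptyInitialConfiguration" "true"
    # Unlock clock and fan controls in nvidia-settings
    Option     "Coolbits" "28"
EndSection
```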

Having given up on nvidia-settings, I turned my attention to nvidia-smi.

Per the NVIDIA documentation, setting application clocks should require two steps:

  1. Determine supported clock speeds: nvidia-smi -q -d SUPPORTED_CLOCKS
  2. Set the desired clock speeds: nvidia-smi -ac $VRAM,$GPU
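In script form, the two steps look roughly like this. The clock pair in the example invocation is a placeholder – valid values must be taken from the SUPPORTED_CLOCKS output:

```shell
# Sketch of the two-step application-clock procedure.
set_app_clocks() {
    vram="$1"; gpu="$2"
    nvidia-smi -q -d SUPPORTED_CLOCKS   # step 1: list supported clock pairs
    nvidia-smi -ac "${vram},${gpu}"     # step 2: set memory,graphics clocks (MHz)
}

# Example (requires root; 4004,1708 is a placeholder pair):
# set_app_clocks 4004 1708
```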

However, this simply didn’t work on my system.

I never discovered the failure’s root cause. Instead, I decided that small performance gains were not worth a large headache, and gave up on overclocking the GPUs.

GPU Undervolting

With overclocking a bust, I decided to attempt undervolting instead.

On Linux, undervolting is managed using nvidia-smi. In contrast to overclocking, undervolting proved to be remarkably simple:
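A minimal sketch of the command, assuming the driver's power-limit flag with persistence mode enabled first (both need root):

```shell
WATTS=85   # the power budget, in watts

# Persistence mode keeps the limit applied; -pl sets the cap on every GPU
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -pm 1
    nvidia-smi -pl "$WATTS"
fi
```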

This instructs the GPUs to self-tune to stay within a power budget of $WATTS. (I don’t know how the GPUs manage that internally.)

Originally, each GPU consumed approximately 130 watts. I used trial-and-error to incrementally reduce the permitted wattage, while monitoring the effect on hashrate.

85 watts proved to be the "sweet spot". At 85 watts, each GPU continued to hash at its maximum rate – it simply did so more efficiently. With 6 GPUs, that adjustment saved 270 watts without introducing a performance penalty.
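The headline number is just the per-GPU saving multiplied by the GPU count:

```shell
# (130 W original draw - 85 W cap) * 6 GPUs
echo $(( (130 - 85) * 6 ))   # prints 270
```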

With the wattage dialed in, I next wrote a small shell script to spare me from having to apply these settings by hand when starting the miner:
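A minimal reconstruction of such a startup script – assuming six GPUs, the 85 W cap, and a placeholder miner command – might look like:

```shell
#!/usr/bin/env bash
# Apply the undervolt settings, then start the miner.
# The GPU count, wattage, and miner invocation are assumptions.

WATTS=85
NUM_GPUS=6

apply_power_limits() {
    nvidia-smi -pm 1                       # enable persistence mode once
    for i in $(seq 0 $((NUM_GPUS - 1))); do
        nvidia-smi -i "$i" -pl "$WATTS"    # cap each GPU individually
    done
}

if command -v nvidia-smi >/dev/null 2>&1; then
    apply_power_limits
fi

# ./miner ...   # placeholder: launch the miner here
```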

The miner ran noticeably cooler after making these changes.

Motherboard BIOS Adjustments

With the large, obvious power savings made in the GPUs, I turned my attention to smaller optimizations in the motherboard BIOS. My strategy was simply to disable all functionality I wasn’t using, in the hope of saving a few watts.

I booted into BIOS and made the following switches:

Settings\Advanced\PCI Subsystem Settings

  • PEG <0,1> – Max Link Speed => [Gen1]

Settings\Advanced\Integrated Peripherals

  • HD Audio Controller => [Disabled]
  • HPET => [Disabled]

Settings\Advanced\Super IO Configuration\Serial(COM) Port 0 Configuration

  • Serial(COM) Port0 => [Disabled]

Settings\Advanced\Super IO Configuration\Parallel(LPT) Port Configuration

  • Parallel(LPT) Port => [Disabled]

Overclocking\CPU Features

  • Intel Virtualization Tech => [Disabled]

These changes saved about 2 watts in total (as measured at the wall).

The rig now ran much cooler, so I was able to turn the chassis fans down to their lowest setting. This saved another 2 watts.

After undervolting the GPUs, I probably could have eliminated the chassis fans entirely. I opted to keep them, though, preferring to spend a few extra watts to keep the GPUs as cool as possible. (The fans may also prove necessary for other computing I intend to do in the future, like hash-cracking.)

Final Benchmarks

This was the new top-line after making the above changes:

