From Unraid | Docs


An update for v6 users: here are two FAQ entries, one for removing a single drive and one for removing multiple drives. The rest of this page is for v5 or lower.

  • How do I remove a disk that I do not plan on replacing?
  • How do I remove multiple drives?

How do I remove a hard drive that I do not plan on replacing?

  • Note: this section is for v5 users!

There are a few reasons you'd want to do this: you want to repurpose the drive for some other reason, or the drive has failed or is failing.

First I'll go over repurposing a working drive.

There are two main steps to this process.

  1. copy any data off the drive back to your array so you don't lose any data
  2. reset the array parity

Before proceeding, make sure to take a screenshot of your current drive configuration!

I'm going to do this by example. Here is what my default unRAID web GUI looks like:

I've got seven drives: data drives 1-5, a parity drive, and a cache drive. I'm going to pretend that disk3 (located from the shell at /mnt/disk3) is the drive I want to remove. For my setup, I've got all my files located in user shares (from the shell, /mnt/user/TV, for instance). My user shares are all set up with "high water" allocation. If you click on the folder icon next to "disk3", you'll see whether there are any files located on that drive. In my example, there are. I need to copy those files off disk3 (/mnt/disk3) back to my user shares (/mnt/user).
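Before copying anything, it can help to inventory exactly what is still on the disk. A minimal sketch using standard find and du; on a real unRAID box DISK would be /mnt/disk3, but here it defaults to a scratch directory with a made-up file purely for illustration:

```shell
# Inventory a disk before copying its contents elsewhere.
# DISK would be /mnt/disk3 on unRAID; a scratch directory is used here.
DISK="${DISK:-/tmp/disk3-demo}"
mkdir -p "$DISK/TV"
echo "episode one" > "$DISK/TV/ep1.mkv"   # stand-in for real media files

find "$DISK" -type f   # list every file still stored on the disk
du -sh "$DISK"         # total space those files occupy
```

Anything listed by find is data that must be moved off the disk before it is removed from the array.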

Telnet into your machine and type "mc" on the command line. You can use its GUI to easily and safely copy your existing data. If you're a first-time mc user, press F9 to bring up the menu and options. If you use a cache drive, then simply copy the contents of disk3 to /mnt/cache to prevent duplicate issues. Make sure that you remove disk3 before the 'mover' script has a chance to run, though!
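The cache-drive copy described above can also be done non-interactively with cp instead of mc. A hedged sketch; SRC and DST would be /mnt/disk3 and /mnt/cache on unRAID, but default to scratch directories here:

```shell
# Copy the full contents of the disk onto the cache drive,
# preserving permissions and timestamps (-a).
SRC="${SRC:-/tmp/demo-disk3}"   # /mnt/disk3 on a real unRAID system
DST="${DST:-/tmp/demo-cache}"   # /mnt/cache on a real unRAID system
mkdir -p "$SRC/TV" "$DST"
echo "data" > "$SRC/TV/show.mkv"   # stand-in file for the demo

cp -a "$SRC/." "$DST/"   # "/." copies the directory's contents, not the dir itself
ls -R "$DST"             # verify the files arrived
```

As the text notes, make sure the copied files are accounted for before the 'mover' script runs.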

Another method involves using rsync, which is better if you only want to copy the differences between the source and destination directories. rsync -a /mnt/disk3/ /mnt/user will copy the contents of disk3 to /mnt/user. Use the -n flag to do a dry run.

  1. Stop the array by pressing "Stop" on the management interface. Un-assign the drive on the Devices page, then return to the unRAID Main page.
  2. Select the 'Utils' tab
  3. Choose "New Config"
  4. Agree and create a new config
  5. Reassign all of the drives you wish to keep in the array
  6. Start the array and let parity rebuild

On versions of unRAID prior to 5.0 [Beta ??]

Log in on the system console or via telnet and type the command below to create a new system.dat file and reset the array configuration information.

initconfig

unRAID will ask you to confirm that you wish to set a new disk configuration. You must reply with Yes (capital Y and lower case es).

When the initconfig command is invoked, old parity data will be immediately discarded, and the process of parity calculation on the remaining assigned and working drives will begin when you next start the array. At this point, your array will not again be protected from a disk failure until the system can complete the process of generating new parity data.

UPDATE: An alternative process for removing a drive from the array can be found here. It allows you to remove a drive without losing parity. The tradeoff is that it will take much longer.

On versions of unRAID prior to 4.5.4

  1. Stop the array by pressing "Stop" on the management interface.
  2. Un-assign the drive on the Devices page
  3. Return to the Main page and check the checkbox below the Restore button (which is actually a Set Initial Configuration button)
  4. Then click the Restore button to create a new system.dat file and reset the array configuration data.

When the Restore button is clicked, old parity data will be immediately discarded, and the process of parity calculation on the remaining assigned and working drives will begin. At this point, your array will not again be protected from a disk failure until the system can complete the process of generating new parity data.