<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>raidz1 &#8211; kema&#039;s Homepage</title>
	<atom:link href="https://kemanai.jp/tag/raidz1/feed/" rel="self" type="application/rss+xml" />
	<link>https://kemanai.jp</link>
	<description>kema's miscellaneous jottings</description>
	<lastBuildDate>Sat, 05 Jan 2019 16:37:20 +0000</lastBuildDate>
	<language>ja</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Assorted tips on FreeBSD zpool storage (work notes)</title>
		<link>https://kemanai.jp/2018/12/30/freebsd-zpool%e3%82%b9%e3%83%88%e3%83%ac%e3%83%bc%e3%82%b8%e3%81%ae%e3%81%82%e3%82%8c%e3%81%93%e3%82%8ctips%ef%bc%88%e4%bd%9c%e6%a5%ad%e3%83%a1%e3%83%a2%ef%bc%89/</link>
					<comments>https://kemanai.jp/2018/12/30/freebsd-zpool%e3%82%b9%e3%83%88%e3%83%ac%e3%83%bc%e3%82%b8%e3%81%ae%e3%81%82%e3%82%8c%e3%81%93%e3%82%8ctips%ef%bc%88%e4%bd%9c%e6%a5%ad%e3%83%a1%e3%83%a2%ef%bc%89/#respond</comments>
		
		<dc:creator><![CDATA[kema]]></dc:creator>
		<pubDate>Sun, 30 Dec 2018 14:33:10 +0000</pubDate>
				<category><![CDATA[Server administration]]></category>
		<category><![CDATA[FreeBSD]]></category>
		<category><![CDATA[Diary / misc]]></category>
		<category><![CDATA[Hobbies]]></category>
		<category><![CDATA[Computers]]></category>
		<category><![CDATA[HDD replacement]]></category>
		<category><![CDATA[raidz1]]></category>
		<category><![CDATA[raidz3]]></category>
		<category><![CDATA[zfs]]></category>
		<category><![CDATA[zpool]]></category>
		<guid isPermaLink="false">http://wp.khz-net.co.jp/?p=2877</guid>

					<description><![CDATA[I have a ZFS storage pool set up. Here is the dmesg output: da0 at umass-sim0 bus 0 scbus5 target 0 lun 0 da0: &#60;ST8000DM 005-2EH112  &#8230; <a href="https://kemanai.jp/2018/12/30/freebsd-zpool%e3%82%b9%e3%83%88%e3%83%ac%e3%83%bc%e3%82%b8%e3%81%ae%e3%81%82%e3%82%8c%e3%81%93%e3%82%8ctips%ef%bc%88%e4%bd%9c%e6%a5%ad%e3%83%a1%e3%83%a2%ef%bc%89/" class="more-link">Continue reading <span class="screen-reader-text">Assorted tips on FreeBSD zpool storage (work notes)</span> <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>I have a ZFS storage pool set up.</p>
<p>Here is the dmesg output:</p>
<blockquote><p>da0 at umass-sim0 bus 0 scbus5 target 0 lun 0<br />
da0: &lt;ST8000DM 005-2EH112 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da0: Serial Number 152D00539000<br />
da0: 400.000MB/s transfers<br />
da0: 7630885MB (15628053168 512 byte sectors)<br />
da0: quirks=0xa&lt;NO_6_BYTE,4K&gt;<br />
da1 at umass-sim0 bus 0 scbus5 target 0 lun 1<br />
da2 at umass-sim0 bus 0 scbus5 target 0 lun 2<br />
da1: &lt;WDC WD80 PUZX-64NEAY0 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da1: Serial Number 152D00539000<br />
da1: 400.000MB/s transfers<br />
da1: 7630885MB (15628053168 512 byte sectors)<br />
da1: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da2: &lt;WDC WD80 PUZX-64NEAY0 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da2: Serial Number 152D00539000<br />
da2: 400.000MB/s transfers<br />
da2: 7630885MB (15628053168 512 byte sectors)<br />
da2: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da3 at umass-sim0 bus 0 scbus5 target 0 lun 3<br />
da3: &lt;WDC WD80 EFZX-68UW8N0 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da3: Serial Number 152D00539000<br />
da3: 400.000MB/s transfers<br />
da3: 7630885MB (15628053168 512 byte sectors)<br />
da3: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da4 at umass-sim0 bus 0 scbus5 target 0 lun 4<br />
da4: &lt;WDC WD80 EFZX-68UW8N0 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da4: Serial Number 152D00539000<br />
da4: 400.000MB/s transfers<br />
da4: 7630885MB (15628053168 512 byte sectors)<br />
da4: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da5 at umass-sim0 bus 0 scbus5 target 0 lun 5<br />
da5: &lt;ST8000AS 0002-1NA17Z 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da5: Serial Number 152D00539000<br />
da5: 400.000MB/s transfers<br />
da5: 7630885MB (15628053168 512 byte sectors)<br />
da5: quirks=0x2&lt;NO_6_BYTE&gt;<br />
random: unblocking device.<br />
da6 at umass-sim0 bus 0 scbus5 target 0 lun 6<br />
da6: &lt;ST8000AS 0002-1NA17Z 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da6: Serial Number 152D00539000<br />
da6: 400.000MB/s transfers<br />
da6: 7630885MB (15628053168 512 byte sectors)<br />
da6: quirks=0x2&lt;NO_6_BYTE&gt;<br />
Trying to mount root from zfs:zroot/ROOT/default []&#8230;<br />
da7 at umass-sim0 bus 0 scbus5 target 0 lun 7<br />
da7: &lt;ST8000AS 0002-1NA17Z 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da7: Serial Number 152D00539000<br />
da7: 400.000MB/s transfers<br />
da7: 7630885MB (15628053168 512 byte sectors)<br />
da7: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da8 at umass-sim0 bus 0 scbus5 target 0 lun 8<br />
da8: &lt;ST8000AS 0002-1NA17Z 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da8: Serial Number 152D00539000<br />
da8: 400.000MB/s transfers<br />
da8: 7630885MB (15628053168 512 byte sectors)<br />
da8: quirks=0x2&lt;NO_6_BYTE&gt;</p></blockquote>
<p>In short:</p>
<p>da0: 8TB (ST8000DM)</p>
<p>da1: 8TB (WD80PUZX)</p>
<p>da2: 8TB (WD80PUZX)</p>
<p>da3: 8TB (WD80EFZX)</p>
<p>da4: 8TB (WD80EFZX)</p>
<p>da5: 8TB (ST8000AS)</p>
<p>da6: 8TB (ST8000AS)</p>
<p>da7: 8TB (ST8000AS)</p>
<p>da8: 8TB (ST8000AS)</p>
<p>That's the lineup. Next, the zpool status:</p>
<blockquote><p># zpool status<br />
pool: zbackup<br />
state: DEGRADED<br />
status: One or more devices could not be opened. Sufficient replicas exist for<br />
the pool to continue functioning in a degraded state.<br />
action: Attach the missing device and online it using &#8216;zpool online&#8217;.<br />
see: http://illumos.org/msg/ZFS-8000-2Q<br />
scan: resilvered 0 in 0h0m with 0 errors on Mon Apr 23 06:29:16 2018<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zbackup DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
da8 ONLINE 0 0 0<br />
5017281946433150361 UNAVAIL 0 0 0 was /dev/da4<br />
da0 ONLINE 0 0 0<br />
da5 ONLINE 0 0 0</p>
<p>errors: No known data errors</p>
<p>pool: zdata<br />
state: ONLINE<br />
status: Some supported features are not enabled on the pool. The pool can<br />
still be used, but some features are unavailable.<br />
action: Enable all features using &#8216;zpool upgrade&#8217;. Once this is done,<br />
the pool may no longer be accessible by software that does not support<br />
the features. See zpool-features(7) for details.<br />
scan: none requested<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zdata ONLINE 0 0 0<br />
raidz3-0 ONLINE 0 0 0<br />
da1 ONLINE 0 0 0<br />
da2 ONLINE 0 0 0<br />
da3 ONLINE 0 0 0<br />
da4 ONLINE 0 0 0<br />
da6 ONLINE 0 0 0</p>
<p>errors: No known data errors</p>
<p>pool: zroot<br />
state: ONLINE<br />
status: Some supported features are not enabled on the pool. The pool can<br />
still be used, but some features are unavailable.<br />
action: Enable all features using &#8216;zpool upgrade&#8217;. Once this is done,<br />
the pool may no longer be accessible by software that does not support<br />
the features. See zpool-features(7) for details.<br />
scan: none requested<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zroot ONLINE 0 0 0<br />
ada0p4 ONLINE 0 0 0</p>
<p>errors: No known data errors</p></blockquote>
<p>da1, da2, da3, da4, and da6 form a RAIDZ3. That's the triple-parity one where data survives up to three drives dying at once. Five 8TB drives for 16TB of capacity.</p>
<p>…Hmm. Great for safety (and so on), but it kinda feels like too much waste.</p>
<p>da0, da4, da5, and da8 form a RAIDZ1 with 24TB of storage. But it's throwing errors…</p>
<p>Recovering it with the zpool command would be a worthwhile experience in itself, but in the spirit of a year-end blowout (of what, exactly?) I went and bought three Toshiba 14TB HDDs.</p>
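<p>A quick sanity check on those numbers: usable raidz space is (drives minus parity) times drive size.</p>

```shell
# Usable raidz capacity = (drives - parity) * drive size.
raidz_usable() { echo "$(( ($1 - $2) * $3 ))TB"; }

raidz_usable 5 3 8   # zdata   (5x8TB raidz3): prints 16TB
raidz_usable 4 1 8   # zbackup (4x8TB raidz1): prints 24TB
```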
<h1><strong><span style="color: #ff0000;">So, with all of the above in mind,</span></strong></h1>
<p>Now, what to do.</p>
<p>First, raidz3 does feel like a bit much, but I've been badly burned by Seagate drives in the past, so I want as much redundancy as I can get.</p>
<p>For reference, current usage according to df -g:</p>
<blockquote><p>zdata 14327 8465 5861 59% /usr/home/jail/ほげほげ</p></blockquote>
<p>So, roughly speaking, a bit under 9TB in use.</p>
<p>Which means: build a new RAIDZ3 from the 14TB drives and get 14TB of storag</p>
<h3><span style="color: #ff0000;">Wait, you can't build a RAIDZ3 out of three drives</span></h3>
<p>Oops.</p>
<p>…However, one trait of raidz is that <span style="color: #ff0000;">once every hard disk making up the storage pool has grown in capacity, the pool's own capacity grows by itself</span>, a convenience that would have been unthinkable back when I was grinding away with newfs and friends.</p>
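<p>One caveat I'll note as my own understanding (not guaranteed): that automatic growth usually only happens when every disk in the vdev is larger and the pool's autoexpand property is on; without it, each disk has to be expanded by hand.</p>

```shell
# Assumed prerequisite for the automatic growth described above.
zpool set autoexpand=on zdata   # grow the pool once all member disks are bigger
zpool get autoexpand zdata      # check the current setting

# Without autoexpand, expansion can be triggered per device instead:
# zpool online -e zdata da2
```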
<p>So, let's try the following plan:</p>
<ol>
<li>Replace three of the five drives currently in the raidz3 (da1, da2, da3, da4, da6) with 14TB drives. Then buy two more 14TB drives someday when I have money again ( ﾉД`)ｼｸｼｸ…</li>
<li>Swap one of the three removed 8TB drives in as the replacement for da4.</li>
</ol>
<p>Ah, but the drive tower I'm using (the Razoku whatever-it-is) is filled to the last slot, so the procedure has to go like this instead:</p>
<ol>
<li>Physically detach da4 and put a raw 14TB drive in its place.</li>
<li>Take one drive out of zdata.</li>
<li>Put that freed drive into zbackup.</li>
<li>Add the 14TB drive to zdata.</li>
</ol>
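<p>The four steps above, sketched as zpool commands. This is just my rendering of the plan: daX and daNEW are placeholders, and the DRY_RUN guard keeps anything from actually running.</p>

```shell
# Dry-run sketch of the four-step swap plan; daX / daNEW are placeholders.
DRY_RUN=yes
run() { if [ "$DRY_RUN" = "yes" ]; then echo "would: $*"; else "$@"; fi; }

run zpool offline zbackup da4       # 1. take the failed da4 out before pulling it
run zpool offline zdata daX         # 2. drop one 8TB drive out of zdata
run zpool replace zbackup da4 daX   # 3. the freed 8TB drive takes da4's slot in zbackup
run zpool replace zdata daX daNEW   # 4. the 14TB drive takes the freed slot in zdata
```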
<p>Then, once the rebuild completes, drop two more of the 8TB drives out of zdata and slot 14TB drives in there. Easy-peasy (yes, that's dated slang).</p>
<hr />
<p>1. Detaching the failing da4</p>
<blockquote><p># zpool offline zbackup da4</p></blockquote>
<p>After running that, zpool status shows:</p>
<blockquote><p>pool: zbackup<br />
state: DEGRADED<br />
status: One or more devices has been taken offline by the administrator.<br />
Sufficient replicas exist for the pool to continue functioning in a<br />
degraded state.<br />
action: Online the device using &#8216;zpool online&#8217; or replace the device with<br />
&#8216;zpool replace&#8217;.<br />
scan: resilvered 0 in 0h0m with 0 errors on Mon Apr 23 06:29:16 2018<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zbackup DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
da8 ONLINE 0 0 0<br />
5017281946433150361 OFFLINE 0 0 0 was /dev/da4<br />
da0 ONLINE 0 0 0<br />
da5 ONLINE 0 0 0</p></blockquote>
<p>Good, it went offline properly.</p>
<p>2. Taking one drive out of zdata</p>
<p>Doing this to a filesystem in live service is genuinely scary… Of course I know that with raidz3 it's fine to lose a drive, but it's hard on the heart.</p>
<p>(Which is more reliable, the ST8000AS or the ST8000DM? The AS is the archive model and the DM presumably the higher-performance one. But the drive that crashed and put me through the wringer back in the day was a DM… Cue umpteen minutes of agonizing and gathering intel on 5ch.)</p>
<p>&nbsp;</p>
<p>…Hm?</p>
<p>Why was da4 in both storage pools? (?_?) ← only noticing this now</p>
<p>Looking more carefully:</p>
<p>zbackup is da0, da5, and da8 in raidz1</p>
<p>zdata is da1, da2, da3, da4, and da6 in raidz3</p>
<p>I've lost the plot a bit, so let's just take da4 down. Here goes!</p>
<blockquote><p># zpool offline zdata da4</p></blockquote>
<p>Then zpool status gives:</p>
<blockquote><p># zpool status<br />
pool: zbackup<br />
state: DEGRADED<br />
status: One or more devices has been taken offline by the administrator.<br />
Sufficient replicas exist for the pool to continue functioning in a<br />
degraded state.<br />
action: Online the device using &#8216;zpool online&#8217; or replace the device with<br />
&#8216;zpool replace&#8217;.<br />
scan: resilvered 0 in 0h0m with 0 errors on Mon Apr 23 06:29:16 2018<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zbackup DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
da8 ONLINE 0 0 0<br />
5017281946433150361 OFFLINE 0 0 0 was /dev/da4<br />
da0 ONLINE 0 0 0<br />
da5 ONLINE 0 0 0</p>
<p>errors: No known data errors</p>
<p>pool: zdata<br />
state: DEGRADED<br />
status: One or more devices has been taken offline by the administrator.<br />
Sufficient replicas exist for the pool to continue functioning in a<br />
degraded state.<br />
action: Online the device using &#8216;zpool online&#8217; or replace the device with<br />
&#8216;zpool replace&#8217;.<br />
scan: none requested<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zdata DEGRADED 0 0 0<br />
raidz3-0 DEGRADED 0 0 0<br />
da1 ONLINE 0 0 0<br />
da2 ONLINE 0 0 0<br />
da3 ONLINE 0 0 0<br />
7675701080755519488 OFFLINE 0 0 0 was /dev/da4<br />
da6 ONLINE 0 0 0</p>
<p>errors: No known data errors</p></blockquote>
<p>…Is this really going to be fine?</p>
<p>Anyway, time to physically remove da4 and put the first 14TB drive in its place.</p>
<p>And the method for finding it is wonderfully analog:</p>
<blockquote><p># cat /dev/da4 &gt; /dev/null</p></blockquote>
<p>run that, then look for the drive whose access LED starts flickering.</p>
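<p>The same trick, wrapped so the read stops by itself; the timeout wrapper and the 30-second default are my additions, not part of the original memo.</p>

```shell
# Flash a drive's access LED by reading from it for a bounded time.
blink() { timeout "${2:-30}" dd if="$1" of=/dev/null bs=64k 2>/dev/null; }

# blink /dev/da4   # da4's LED should flicker for about 30 seconds
```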
<p>Right, then.</p>
<blockquote><p># shutdown -p now</p></blockquote>
<p>and go to pull the drive ou…</p>
<h3><span style="color: #ff0000;">Wait, where did da7 go?</span></h3>
<p>&nbsp;</p>
<p>Yes. I only just noticed that /dev/da7, left over from an old gmirror-era setup, has been sitting idle.</p>
<p>So: pull da4 and da7, and stick the 14TB drives in. dmesg now shows:</p>
<blockquote><p>da0 at umass-sim0 bus 0 scbus5 target 0 lun 0<br />
da0: &lt;ST8000AS 0002-1NA17Z 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da0: Serial Number 152D00539000<br />
da0: 400.000MB/s transfers<br />
da0: 7630885MB (15628053168 512 byte sectors)<br />
da0: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da1 at umass-sim0 bus 0 scbus5 target 0 lun 1<br />
da1: &lt;ST8000AS 0002-1NA17Z 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da1: Serial Number 152D00539000<br />
da2 at umass-sim0 bus 0 scbus5 target 0 lun 2<br />
da1: 400.000MB/s transfers<br />
da1: 7630885MB (15628053168 512 byte sectors)<br />
da1: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da2: &lt;TOSHIBA MN07ACA14T 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da2: Serial Number 152D00539000<br />
da2: 400.000MB/s transfers<br />
da2: 13351936MB (27344764928 512 byte sectors)<br />
da2: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da3 at umass-sim0 bus 0 scbus5 target 0 lun 3<br />
da3: &lt;ST8000AS 0002-1NA17Z 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da3: Serial Number 152D00539000<br />
da3: 400.000MB/s transfers<br />
da3: 7630885MB (15628053168 512 byte sectors)<br />
da3: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da4 at umass-sim0 bus 0 scbus5 target 0 lun 4<br />
da4: &lt;ST8000AS 0002-1NA17Z 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da4: Serial Number 152D00539000<br />
da4: 400.000MB/s transfers<br />
da4: 7630885MB (15628053168 512 byte sectors)<br />
da4: quirks=0x2&lt;NO_6_BYTE&gt;<br />
random: unblocking device.<br />
da5 at umass-sim0 bus 0 scbus5 target 0 lun 5<br />
da5: &lt;ST8000DM 005-2EH112 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da5: Serial Number 152D00539000<br />
da5: 400.000MB/s transfers<br />
da5: 7630885MB (15628053168 512 byte sectors)<br />
da5: quirks=0xa&lt;NO_6_BYTE,4K&gt;<br />
Trying to mount root from zfs:zroot/ROOT/default []&#8230;<br />
da6 at umass-sim0 bus 0 scbus5 target 0 lun 6<br />
da6: &lt;WDC WD80 PUZX-64NEAY0 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da6: Serial Number 152D00539000<br />
da6: 400.000MB/s transfers<br />
da6: 7630885MB (15628053168 512 byte sectors)<br />
da6: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da7 at umass-sim0 bus 0 scbus5 target 0 lun 7<br />
da7: &lt;WDC WD80 PUZX-64NEAY0 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da7: Serial Number 152D00539000<br />
da7: 400.000MB/s transfers<br />
da7: 7630885MB (15628053168 512 byte sectors)<br />
da7: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da8 at umass-sim0 bus 0 scbus5 target 0 lun 8<br />
da8: &lt;WDC WD80 EFZX-68UW8N0 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da8: Serial Number 152D00539000<br />
da8: 400.000MB/s transfers<br />
da8: 7630885MB (15628053168 512 byte sectors)<br />
da8: quirks=0x2&lt;NO_6_BYTE&gt;<br />
da9 at umass-sim0 bus 0 scbus5 target 0 lun 9<br />
da9: &lt;TOSHIBA MN07ACA14T 1520&gt; Fixed Direct Access SPC-4 SCSI device<br />
da9: Serial Number 152D00539000<br />
da9: 400.000MB/s transfers<br />
da9: 13351936MB (27344764928 512 byte sectors)<br />
da9: quirks=0x2&lt;NO_6_BYTE&gt;</p></blockquote>
<p>Hm? They've come up as da2 and da9. <strong><del><span style="color: #ff0000;">Oh well, whatever</span></del></strong></p>
<blockquote><p># zpool status<br />
pool: zbackup<br />
state: DEGRADED<br />
status: One or more devices has been taken offline by the administrator.<br />
Sufficient replicas exist for the pool to continue functioning in a<br />
degraded state.<br />
action: Online the device using &#8216;zpool online&#8217; or replace the device with<br />
&#8216;zpool replace&#8217;.<br />
scan: resilvered 0 in 0h0m with 0 errors on Mon Apr 23 06:29:16 2018<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zbackup DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
da3 ONLINE 0 0 0<br />
5017281946433150361 OFFLINE 0 0 0 was /dev/da4<br />
da5 ONLINE 0 0 0<br />
da0 ONLINE 0 0 0</p>
<p>errors: No known data errors</p>
<p>pool: zdata<br />
state: DEGRADED<br />
status: One or more devices has been taken offline by the administrator.<br />
Sufficient replicas exist for the pool to continue functioning in a<br />
degraded state.<br />
action: Online the device using &#8216;zpool online&#8217; or replace the device with<br />
&#8216;zpool replace&#8217;.<br />
scan: none requested<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zdata DEGRADED 0 0 0<br />
raidz3-0 DEGRADED 0 0 0<br />
da6 ONLINE 0 0 0<br />
da7 ONLINE 0 0 0<br />
da8 ONLINE 0 0 0<br />
7675701080755519488 OFFLINE 0 0 0 was /dev/da4<br />
da1 ONLINE 0 0 0</p>
<p>errors: No known data errors</p>
<p>pool: zroot<br />
state: ONLINE<br />
status: Some supported features are not enabled on the pool. The pool can<br />
still be used, but some features are unavailable.<br />
action: Enable all features using &#8216;zpool upgrade&#8217;. Once this is done,<br />
the pool may no longer be accessible by software that does not support<br />
the features. See zpool-features(7) for details.<br />
scan: none requested<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zroot ONLINE 0 0 0<br />
ada0p4 ONLINE 0 0 0</p>
<p>errors: No known data errors</p></blockquote>
<p>Good, good. Now let's get it in there.</p>
<p>Hm, is this the right command?</p>
<blockquote><p># zpool replace zdata da4 da2</p></blockquote>
<p>And then the display showed:</p>
<blockquote><p>pool: zdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Sun Dec 30 22:24:04 2018<br />
32.3M scanned out of 19.5T at 2.49M/s, (scan is slow, no estimated time)<br />
6.22M resilvered, 0.00% done<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zdata DEGRADED 0 0 0<br />
raidz3-0 DEGRADED 0 0 0<br />
da6 ONLINE 0 0 0<br />
da7 ONLINE 0 0 0<br />
da8 ONLINE 0 0 0<br />
replacing-3 DEGRADED 0 0 0<br />
7675701080755519488 OFFLINE 0 0 0 was /dev/da4<br />
da2 ONLINE 0 0 0<br />
da1 ONLINE 0 0 0</p>
<p>errors: No known data errors</p></blockquote>
<p>So it looks like this now. Presumably this means it's rebuilding (resilvering). (Goes off to stare at the HDD access LEDs.)</p>
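<p>Instead of staring at LEDs, the percent-done figure can also be scraped out of zpool status; here it's fed a captured line so the pipeline can be tried without a live pool.</p>

```shell
# Pull the "% done" figure out of zpool status output.
status='    11.4G resilvered, 0.29% done'
echo "$status" | awk '/resilvered,/ { print $3 }'   # prints 0.29%
```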
<p>With raidz3 I could probably get away with swapping in the second drive at the same time, but <del>while younger me would surely have charged ahead, present-day grown-up me</del> I don't do dangerous things like that.</p>
<p>Next, bring da4 back into zbackup:</p>
<blockquote><p># zpool online zbackup da4</p>
<p># zpool status<br />
pool: zbackup<br />
state: ONLINE<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Sun Dec 30 22:36:02 2018<br />
870M scanned out of 12.5T at 39.5M/s, 92h25m to go<br />
197M resilvered, 0.01% done<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zbackup ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
da3 ONLINE 0 0 0<br />
da4 ONLINE 0 0 0<br />
da5 ONLINE 0 0 0<br />
da0 ONLINE 0 0 0</p>
<p>errors: No known data errors</p>
<p>pool: zdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Sun Dec 30 22:24:04 2018<br />
57.1G scanned out of 19.5T at 79.0M/s, 71h33m to go<br />
11.4G resilvered, 0.29% done<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zdata DEGRADED 0 0 0<br />
raidz3-0 DEGRADED 0 0 0<br />
da6 ONLINE 0 0 0<br />
da7 ONLINE 0 0 0<br />
da8 ONLINE 0 0 0<br />
replacing-3 DEGRADED 0 0 0<br />
7675701080755519488 OFFLINE 0 0 0 was /dev/da4<br />
da2 ONLINE 0 0 0<br />
da1 ONLINE 0 0 0</p>
<p>errors: No known data errors</p>
<p>pool: zroot<br />
state: ONLINE<br />
status: Some supported features are not enabled on the pool. The pool can<br />
still be used, but some features are unavailable.<br />
action: Enable all features using &#8216;zpool upgrade&#8217;. Once this is done,<br />
the pool may no longer be accessible by software that does not support<br />
the features. See zpool-features(7) for details.<br />
scan: none requested<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zroot ONLINE 0 0 0<br />
ada0p4 ONLINE 0 0 0</p>
<p>errors: No known data errors</p></blockquote>
<p>Looks like about 70 hours of waiting for the two of them combined.</p>
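<p>For what it's worth, that estimate lines up with the numbers zpool printed (taking its T and M as binary units):</p>

```shell
# Time left = data remaining / scan rate, for the zdata resilver.
awk 'BEGIN { printf "%.1f hours\n", 19.5 * 2^40 / (79.0 * 2^20) / 3600 }'   # prints 71.9 hours
```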
<p>And with that, this year's work is done. Phew.</p>
<hr />
<p>Added 2019/1/3.</p>
<p>The rebuild seems to be done, so let's go for the second 14TB drive.</p>
<blockquote><p># zpool replace zdata da1 da9</p>
<p># zpool status zdata<br />
pool: zdata<br />
state: ONLINE<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Thu Jan 3 05:56:07 2019<br />
66.1M scanned out of 19.5T at 2.00M/s, (scan is slow, no estimated time)<br />
12.6M resilvered, 0.00% done<br />
config:</p>
<p>NAME STATE READ WRITE CKSUM<br />
zdata ONLINE 0 0 0<br />
raidz3-0 ONLINE 0 0 0<br />
da6 ONLINE 0 0 0<br />
da7 ONLINE 0 0 0<br />
da8 ONLINE 0 0 0<br />
da2 ONLINE 0 0 0<br />
replacing-4 ONLINE 0 0 0<br />
da1 ONLINE 0 0 0<br />
da9 ONLINE 0 0 0</p>
<p>errors: No known data errors</p></blockquote>
<p>Alright, now we wait for another rebuild.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://kemanai.jp/2018/12/30/freebsd-zpool%e3%82%b9%e3%83%88%e3%83%ac%e3%83%bc%e3%82%b8%e3%81%ae%e3%81%82%e3%82%8c%e3%81%93%e3%82%8ctips%ef%bc%88%e4%bd%9c%e6%a5%ad%e3%83%a1%e3%83%a2%ef%bc%89/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
