Is this likely to cause me problems?
on 21.09.2010 22:33:21 by Jon Hardcastle
Hi,
I am finally replacing an old and now failed drive with a new one.
I normally create a partition the size of the entire disk and add that, but whilst checking that the sizes marry up I noticed an oddity...
Below is an fdisk dump of all the drives in my RAID6 array
sdc---
/dev/sdc1 2048 1953525167 976761560 fd Linux raid autodetect
---
Seems to be different from, say, sda, which is also '1TB'
sda---
/dev/sda1 63 1953520064 976760001 fd Linux raid autodetect
---
Now, I read somewhere that the sizes fluctuate but some core value remains the same; can anyone confirm whether this is the case?
I am reluctant to add it to my array until I know for sure...
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xabb7ea39
Device Boot Start End Blocks Id System
/dev/sda1 63 1953520064 976760001 fd Linux raid autodetect
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 63 976768064 488384001 fd Linux raid autodetect
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc7314361
Device Boot Start End Blocks Id System
/dev/sdc1 2048 1953525167 976761560 fd Linux raid autodetect
Disk /dev/sdd: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdd1 63 1465144064 732572001 fd Linux raid autodetect
Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sde1 63 1953520064 976760001 fd Linux raid autodetect
Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4d291cc0
Device Boot Start End Blocks Id System
/dev/sdf1 63 1953520064 976760001 fd Linux raid autodetect
Disk /dev/sdg: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdg1 63 1465144064 732572001 fd Linux raid autodetect
-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'There comes a time when you look into the mirror, and you realise that what you see is all that you will ever be. Then you accept it, or you kill yourself. Or you stop looking into mirrors... :)'
***********
Please note, I am phasing out jd_hardcastle AT yahoo.com and replacing it with jon AT eHardcastle.com
***********
-----------------------
Re: Is this likely to cause me problems?
on 21.09.2010 23:15:17 by John Robinson
On 21/09/2010 21:33, Jon Hardcastle wrote:
> I am finally replacing an old and now failed drive with a new one.
>
> I normally create a partition the size of the entire disk and add that, but whilst checking that the sizes marry up I noticed an oddity...
>
> Below is an fdisk dump of all the drives in my RAID6 array
>
> sdc---
> /dev/sdc1 2048 1953525167 976761560 fd Linux raid autodetect
> ---
> Seems to be different from, say, sda, which is also '1TB'
>
> sda---
> /dev/sda1 63 1953520064 976760001 fd Linux raid autodetect
> ---
>
> Now, I read somewhere that the sizes fluctuate but some core value remains the same; can anyone confirm whether this is the case?
>
> I am reluctant to add it to my array until I know for sure...
Looks like you've used a different partition tool on the new disc than
you used on the old ones - the old ones started the first partition at
the beginning of cylinder 1, while new tools like to start partitions at
1MB so they're aligned on 4K sector boundaries, SSDs' erase group
boundaries, etc. You could duplicate the original partition table like this:
sfdisk -d /dev/older-disc | sfdisk /dev/new-disc
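For example, with a dry run first (a sketch; the device names are illustrative, and sfdisk's -n switch does everything except write to disk):

sfdisk -d /dev/sda | sfdisk -n /dev/sdc   # preview only, nothing written
sfdisk -d /dev/sda | sfdisk /dev/sdc      # write the copied table
sfdisk -d /dev/sdc                        # dump again to verify it matches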
But it wouldn't cause you any problems, because the new partition is
bigger than the old one, despite starting a couple of thousand sectors
later. This in itself is odd - how did you come to not use the last
chunk of your original discs?
Cheers,
John.
Re: Is this likely to cause me problems?
on 21.09.2010 23:18:12 by Jon Hardcastle
--- On Tue, 21/9/10, John Robinson wrote:
> From: John Robinson
> Subject: Re: Is this likely to cause me problems?
> To: Jon@eHardcastle.com
> Cc: linux-raid@vger.kernel.org
> Date: Tuesday, 21 September, 2010, 22:15
> On 21/09/2010 21:33, Jon Hardcastle wrote:
> [...]
>
> Looks like you've used a different partition tool on the new disc than you used on the old ones - the old ones started the first partition at the beginning of cylinder 1, while new tools like to start partitions at 1MB so they're aligned on 4K sector boundaries, SSDs' erase group boundaries, etc. You could duplicate the original partition table like this:
>
> sfdisk -d /dev/older-disc | sfdisk /dev/new-disc
>
> But it wouldn't cause you any problems, because the new partition is bigger than the old one, despite starting a couple of thousand sectors later. This in itself is odd - how did you come to not use the last chunk of your original discs?
>
> Cheers,
>
> John.
>
> --
I used fdisk in all cases.. on the same machine.. so unless fdisk has changed?
Primary... 1 partition.. default start and end.
And what do you mean about not using the last chunk of the old disc?
Thank you!
Re: Is this likely to cause me problems?
on 22.09.2010 00:34:26 by John Robinson
On 21/09/2010 22:18, Jon Hardcastle wrote:
> --- On Tue, 21/9/10, John Robinson wrote:
> [...]
>
> I used fdisk in all cases.. on the same machine.. so unless fdisk has changed?
May have done. Certainly my util-linux from CentOS 5 is newer than the
last version of util-linux on freshmeat.net and kernel.org. Peeking at
the source code, it looks like Red Hat have been patching util-linux
themselves for almost 5 years.
> Primary... 1 partition.. default start and end.
>
> And what do you mean about not using the last chunk of the old disc?
Your sda has 1953525168 sectors but your partition ends at sector
1953520064, 5104 sectors short of the end of the disc. This may be
related to the possible bug somebody complains about on freshmeat.net
whereby fdisk gets the last cylinder wrong. I just checked, on my 1TB
discs I have the same end sector as you so I guess the fdisk I had when
I built my array was the same as yours when you built yours.
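The arithmetic behind that, as a sketch (values taken from the fdisk output above; old fdisk rounds the partition down to a whole fake "cylinder"):

echo $((255 * 63))                      # 16065 sectors per fake cylinder
echo $((1953525168 / 16065))            # 121601 whole cylinders fit on the disc
echo $((121601 * 16065 - 1))            # 1953520064 = sda1's End sector
echo $((1953525168 - 121601 * 16065))   # 5103 sectors left past the partition
                                        # (5104 counting from End to the
                                        # one-past-the-end sector total)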
Cheers,
John.
Re: Is this likely to cause me problems?
on 22.09.2010 08:42:24 by Jon Hardcastle
--- On Tue, 21/9/10, John Robinson wrote:
> From: John Robinson
> Subject: Re: Is this likely to cause me problems?
> To: Jon@eHardcastle.com
> Cc: linux-raid@vger.kernel.org
> Date: Tuesday, 21 September, 2010, 22:15
> [...]
> Looks like you've used a different partition tool on the new disc than you used on the old ones [...] You could duplicate the original partition table like this:
>
> sfdisk -d /dev/older-disc | sfdisk /dev/new-disc
> [...]
Ok, Thank you.
So do you have any recommendations? I would like to 'trust' the new version of fdisk but I can not risk torpedoing myself. I have 2 more drives I need to 'phase out' at some point, but they will likely be with 1.5TB drives.

My gut tells me that whilst I have other drives the same size I should use the same parameters... then, when I have a bigger drive that is definitely not going to cause any size issues, let fdisk do its magic.

So following that premise, is there any downside to copying the partition table off another drive?
Re: Is this likely to cause me problems?
on 22.09.2010 11:09:04 by Tim Small
On 21/09/10 22:07, Jon Hardcastle wrote:
> Are you sure this is the issue?
Pretty sure.
> the number of blocks is different in both measurements, see below...
>
Yes - differing CHS-compatible geometry will do this, because
CHS-compatible partitions will start/end on the fake "cylinder"
boundaries. So you have different amounts of unnecessary wastage at
both the start and the end when using different numbers of pretend
cylinders, heads, and sectors per track...
If there's nothing on the disk yet, then surely you haven't got anything
to lose by telling fdisk to use a different CHS layout (using the command
line switches) anyway, or by just ignoring CHS entirely and using the whole
disk. Like I said, it's highly unlikely anything on your system ever
does anything with CHS block addressing anyway - Linux uses LBA
addressing exclusively, and so do its bootloaders.
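Both options as commands (a sketch; the geometry values and device name are illustrative - -H and -S are the old fdisk switches for overriding heads and sectors per track):

fdisk -H 255 -S 63 /dev/sdc     # force the same fake geometry as the old discs
mdadm /dev/md0 --add /dev/sdc   # or skip partitioning and give md the whole disk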
Tim.
--
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53 http://seoss.co.uk/ +44-(0)1273-808309
Re: Is this likely to cause me problems?
on 22.09.2010 13:25:35 by John Robinson
On 22/09/2010 07:42, Jon Hardcastle wrote:
[...]
> So do you have any recommendations? I would like to 'trust' the new version of fdisk but I can not risk torpedoing myself. I have 2 more drives I need to 'phase out' at some point, but they will likely be with 1.5TB drives.
The new layout you've got is meant for SSDs and drives with 4K sectors;
the old layout is fine for you.
> My gut tells me that whilst I have other drives the same size I should use the same parameters... then, when I have a bigger drive that is definitely not going to cause any size issues, let fdisk do its magic.
>
> So following that premise, is there any downside to copying the partition table off another drive?
I can't think of one. For backup, if I remember correctly Doug Ledford's
hot-swap auto-rebuilding onto virgin drives work was going to create
partition tables by copying them from existing drives (or by having the
user copy the required partition table from a drive in advance).
Cheers,
John.
Re: Is this likely to cause me problems?
on 22.09.2010 16:38:13 by Jon Hardcastle
--- On Wed, 22/9/10, John Robinson wrote:
> From: John Robinson
> Subject: Re: Is this likely to cause me problems?
> To: Jon@eHardcastle.com
> Cc: linux-raid@vger.kernel.org
> Date: Wednesday, 22 September, 2010, 12:25
> On 22/09/2010 07:42, Jon Hardcastle wrote:
> [...]
> I can't think of one. [...]
>
Hi,
Thanks for your help. I have been doing some background reading and am convincing myself to leave the boundaries as they are, as it appears there are performance gains to be had? Assuming this is true, as long as the partition size is LARGER than the other 1TB partitions I should be OK, right?
Device Boot Start End Blocks
/dev/sda1 63 1953520064 976760001
/dev/sdc1 2048 1953525167 976761560
If I subtract Start from End (strictly the sector count is End - Start + 1, but the +1 cancels out in the comparison):
sda = 1953520064 - 63 = 1953520001
sdc = 1953525167 - 2048 = 1953523119 (3118 larger than sda)

As long as sdc is larger, which it is by 3118, I should be OK, right?
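A quick cross-check of the same comparison (a sketch; blockdev reports sizes in 512-byte sectors, and the expected values are derived from the fdisk output above):

blockdev --getsz /dev/sda1   # expect 1953520002 (= 1953520064 - 63 + 1)
blockdev --getsz /dev/sdc1   # expect 1953523120 (= 1953525167 - 2048 + 1)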
I am even thinking about individually removing my drives from the array and letting fdisk use its new calculations for the existing drives. I could do with better performance!
Re: Is this likely to cause me problems?
on 23.09.2010 15:14:35 by John Robinson
On 22/09/2010 15:38, Jon Hardcastle wrote:
[...]
> Thanks for your help. I have been doing some background reading and am convincing myself to leave the boundaries as they are, as it appears there are performance gains to be had? Assuming this is true, as long as the partition size is LARGER than the other 1TB partitions I should be OK, right?
>
> Device Boot Start End Blocks
> /dev/sda1 63 1953520064 976760001
> /dev/sdc1 2048 1953525167 976761560
>
> If I subtract Start from End (strictly the sector count is End - Start + 1, but the +1 cancels out in the comparison):
> sda = 1953520064 - 63 = 1953520001
> sdc = 1953525167 - 2048 = 1953523119 (3118 larger than sda)
>
> As long as sdc is larger, which it is by 3118, I should be OK, right?
>
> I am even thinking about individually removing my drives from the array and letting fdisk use its new calculations for the existing drives. I could do with better performance!
Don't do that. There is no performance benefit from aligning your
partitions. There would be a performance benefit to making LVM align
itself correctly over md RAID stripes, and the filesystem over LVM or md
RAID, but there is no performance benefit from aligning md RAID over
partitions, *unless* you have 4K sector drives or SSD.
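What that kind of alignment looks like in practice (a sketch; the chunk size and drive count are illustrative, not taken from this thread):

# a 64k-chunk RAID6 across 7 drives has 5 data discs, so a full stripe
# is 5 * 64k = 320k
pvcreate --dataalignment 320k /dev/md0
# or, for a filesystem directly on md: stride = chunk / 4k block size,
# stripe-width = stride * number of data discs
mkfs.ext4 -E stride=16,stripe-width=80 /dev/md0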
Honestly you are better off duplicating your original partition table
onto your new drive so all your partitions are the same, mostly so there
can't be any more confusion later on.
Cheers,
John.