Discussion:
What are the best practices for Linux partitioning & Mount points for Production systems
nk oorda
2012-03-02 09:04:55 UTC



Hi,

I need some suggestions on defining the partition sizes for my
production systems. We are going to use CentOS 6.2 (64-bit):

- Partition size
- Mount points

What I was able to gather from a Google search is:

/       root file system (/bin, /sbin, /dev, /root)
/usr    programs and source code
/var    variable data
/boot   boot kernels
/tmp    temporary file locations
/work   do your work here ("you can name it anything")
swap

- */home* - set nosuid and nodev, with the disk quota option
- */usr* - set nodev
- */tmp* - nodev, nosuid and noexec must all be enabled
- */var* - set nodev and nosuid
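For reference, I believe those options would end up looking something
like this in /etc/fstab (the device and LV names here are only
placeholders, not a recommendation):

    /dev/vg0/lv_home  /home  ext4  defaults,nodev,nosuid,usrquota  1 2
    /dev/vg0/lv_usr   /usr   ext4  defaults,nodev                  1 2
    /dev/vg0/lv_tmp   /tmp   ext4  defaults,nodev,nosuid,noexec    1 2
    /dev/vg0/lv_var   /var   ext4  defaults,nodev,nosuid           1 2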


Most of the servers will be running:
- Apache
- Tomcat
- SOLR

and a few of them will be running MySQL as a database.


My concern is that one of the developers accidentally deleted the /usr
files using sudo access. If I can somehow protect the core system from
developers' mistakes, that would be really good.

Thanks in advance for your help.
Tethys
2012-03-02 09:45:34 UTC
Post by nk oorda
What are the best practices for Linux partitioning & Mount points for
Production systems
I generally go for:

- /boot (ext3, about 1GB)
- /boot2 (ext3, about 1GB)
- LVM for the rest of the "disk" (which is usually an array
rather than a single disk)

The two /boot filesystems allow distribution upgrades with rollback.
LVM gives you the flexibility to do what you want with the rest.
It's less important than it used to be, and there's an argument for
just sticking the rest of the OS on a single filesystem, but personally
I still go for separate /usr, /tmp and /var. The latter two prevent the
system falling over as badly when a rogue process spams the disk with
crap, and I have a read only /usr (which I haven't managed to achieve
at boot time with bind mounts, despite the claims accompanying the
misguided removal of support for a separate /usr in recent Fedora
releases).
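In command form that's roughly the following (device names and sizes
are illustrative only, not a recommendation):

    mkfs.ext3 /dev/sda1            # /boot
    mkfs.ext3 /dev/sda2            # /boot2
    pvcreate /dev/sda3             # rest of the disk/array for LVM
    vgcreate vg0 /dev/sda3
    lvcreate -n lv_usr -L 8G vg0
    lvcreate -n lv_var -L 8G vg0
    lvcreate -n lv_tmp -L 2G vg0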

Tet
Andrew Farnsworth
2012-03-02 09:53:47 UTC
I'll second this, with one caveat. If you are running an active system
that generates a lot of logs (read: a production, public-facing web/app
server), then you might want a separate partition for /var/log and/or
any other log directories in non-standard locations. In my experience
you will invariably end up wanting either more space for the logs or
tight monitoring of the disk space they use, and a separate partition
makes both very simple.
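If you're on LVM anyway, carving that out and growing it later is only
a couple of commands (the names and sizes are just an example):

    lvcreate -n lv_varlog -L 4G vg0
    mkfs.ext4 /dev/vg0/lv_varlog       # then mount it on /var/log
    # later, when the logs inevitably need more room:
    lvextend -L +4G /dev/vg0/lv_varlog
    resize2fs /dev/vg0/lv_varlog       # ext3/ext4 can grow online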

Andy
nk oorda
2012-03-02 10:05:04 UTC
Post by Andrew Farnsworth
[SNIP]
Thanks Andy & Tet for the reply,

I have also heard that data on the outer part of a hard disk is faster
to access, because the outer sectors have a bigger circumference than
the inner ones, so the order in which you lay out the partitions also
matters.

E.g. when setting up partitions, it is therefore better to begin with
the ones needing the fastest disk access, so that they are located on
the outer part of the disk:

/boot
swap
/
/var

Suggestions?


--N
Andrew Farnsworth
2012-03-02 10:14:43 UTC
I have not noticed a significant performance difference in disk access
speeds relating to the external vs internal tracks on a platter.

If you are that concerned about throughput, you would be better off
adding an SSD to your system and placing the most heavily used files
there. For example, placing a DB on an SSD can show dramatic
improvements in performance; just be aware of the limited life span of
an SSD and keep your backups current.

Andy
John Hearns
2012-03-02 10:35:48 UTC
nk.oorda - this stuff about the external sectors of a hard disk is all
well and good. However, these days you should not be planning disk
partitions around putting the most heavily accessed data there.

For a production system, specify a mirrored pair of system disks for
the /, /usr, /var (etc.) partitions.

Put your /work space on a SEPARATE set of RAID disks, on a RAID
controller - remember, there are memory buffers in RAID controllers.
Either internal to the box or, if you have big data requirements, on an
external array. And, as has been said above, consider solid-state
drives.
nk oorda
2012-03-02 10:42:15 UTC
Post by John Hearns
[SNIP]
Thanks John. Yes, we do have SSD disks, and /work is mounted on one; we
are getting really good performance numbers for our indexing server,
which runs SOLR.

So is

/
/boot
/usr
/var
/work

a good scheme to go with? I am also interested in knowing about the
mount arguments.
Thanks again.

--N
Nix
2012-03-06 13:33:12 UTC
Post by Andrew Farnsworth
I have not noticed a significant performance difference in disk access
speeds relating to the external vs internal tracks on a platter.
There is a significant difference, often as much as a factor of two.
This is inevitable: the outside tracks of the disk have a higher linear
velocity than the inside, so given a constant data density there's more
room to cram in more sectors. (It is possible to reduce the data density
and use a constant sectors/track across the whole disk, but that went
out in the early 90s: it costs too much space.)
--
NULL && (void)
Alain Williams
2012-03-06 13:54:40 UTC
Post by Nix
There is a significant difference, often as much as a factor of two.
[SNIP]
Interesting ... a default install tends to put /boot in the first few
cylinders, something that is not used after boot.
Looking at a machine where the rest of the disk is given over to LVM,
in which the other partitions are found, I see (extents):

  RFS      0 to  255
  /tmp   256 to  767
  /www   768 to 1791
  swap  1792 to 2303
  /var  2304 to 4303
  /usr  4304 to 5327

How to organise them? From outside to inside, perhaps: /www /var /tmp
/usr RFS swap - roughly in order of most used.
* The machine should never swap.
* RFS - a few files will be used a lot, but they are only read and will
  end up in the buffer cache.

Hmm, food for thought.

BTW: I assume that the low-numbered extents reported by lvdisplay
--maps and by fdisk mean towards the outside of the disk.
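For anyone wanting to look at their own layout, the commands were along
these lines (the LV and PV names are whatever yours happen to be):

    lvdisplay --maps /dev/vg0/lv_var   # physical extents per LV segment
    pvdisplay --maps /dev/sda2         # the same picture from the PV side
    lvs --segments -o +seg_pe_ranges   # compact per-segment summary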
--
Alain Williams
Linux/GNU Consultant - Mail systems, Web sites, Networking, Programmer, IT Lecturer.
+44 (0) 787 668 0256 http://www.phcomp.co.uk/
Parliament Hill Computers Ltd. Registration Information: http://www.phcomp.co.uk/contact.php
#include <std_disclaimer.h>
Andrew Farnsworth
2012-03-06 14:02:53 UTC
/boot is put in the first few cylinders because boot loaders used not
to be able to access anything past a certain point on the disk. Once
disks grew beyond that size, this became an issue, which led to the
default behaviour of placing /boot first on the disk so that even older
boot loaders could load it correctly on a larger disk. By maintaining
this practice, you make it possible to boot from even an OLD "liveCD"
or install disk set and pass parameters that will cause your system to
load from the hard drive, even after booting from the (gasp) floppy or
CD.

Andy
Keith Edmunds
2012-03-06 20:09:12 UTC
Post by Nix
There is a significant difference, often as much as a factor of two.
I'm surprised. Do you have a source for that statement?
--
"You can have everything in life you want if you help enough other people
get what they want" - Zig Ziglar.

Who did you help today?
Martin A. Brooks
2012-03-06 21:05:14 UTC
Post by Keith Edmunds
I'm surprised. Do you have a source for that statement?
I don't have numbers, but I can confirm it is, or used to be, a valid
strategy. The practice is called "short stroking".

http://en.wikipedia.org/wiki/Disk-drive_performance_characteristics#Short_stroking



Nix
2012-03-07 00:34:23 UTC
Post by Keith Edmunds
I'm surprised. Do you have a source for that statement?
My own benchmarks on my 2009-vintage desktop and server, during initial
burn-in. One machine showed 50MB/s on inner tracks and 95MB/s on outer
tracks (damn nearly a factor of two); the other (an Areca RAID-5 array)
showed 160MB/s on inner tracks and 250MB/s on outer tracks.

I wish I could remember the tool I used. It wrote logfiles giving the
read rate against distance into the device or file being used (normally
a block device, of course) and could then postprocess this to give nice
ASCII-art graphs... it was neither bonnie++ nor ffsb, but I can't
remember what it was. How annoying.
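You can get a crude version of the same measurement with nothing more
than dd, sampling the sequential read rate at the start and near the
end of the raw device (read-only; the device name and the skip offset
are placeholders you'd adjust for your own disk size):

    # outer tracks: read 1GB from the start of the disk
    dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct
    # inner tracks: skip most of the way in first
    dd if=/dev/sdX of=/dev/null bs=1M count=1024 skip=450000 iflag=direct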
--
NULL && (void)
John Hearns
2012-03-07 14:56:31 UTC
Post by Nix
I wish I could remember the tool I used. It wrote logfiles giving the
read rate against distance into the device or file being used (normally
a block device, of course) and could then postprocess this to give nice
ASCII-art graphs... it was neither bonnie++ nor ffsb, but I can't
remember what it was. How annoying.
Doesn't match your description, but another tool I know is iozone
http://www.iozone.org/
Nix
2012-03-07 00:36:20 UTC
Post by Keith Edmunds
I'm surprised. Do you have a source for that statement?
Hang on. I think there may have been a misreading here. I (and the OP,
I think) am talking about linear read bandwidth, not seek time. Seek
time is roughly constant per cylinder across disks, IIRC; I have never
measured this.
--
NULL && (void)
James Courtier-Dutton
2012-03-02 12:21:03 UTC
Post by Tethys
[SNIP]
The two /boot filesystems allow distribution upgrades with rollback.
As a side note, if you need the rollback feature, you only need /boot2
on Red Hat based installations; Debian and Ubuntu do not need it. This
is because during an upgrade, Red Hat deletes the old kernel and
replaces it with a new one, whereas Ubuntu just adds the new kernel to
the GRUB menu without deleting the old one. On Ubuntu you then manually
uninstall the old kernel once the new one is up and running.
I think the Ubuntu/Debian method is safer than the Red Hat method.
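On Ubuntu that manual cleanup looks something like this (the version in
the package name is only an example; check uname -r first so you don't
remove the kernel you are actually running):

    uname -r                  # the kernel currently in use
    dpkg -l 'linux-image-*'   # every kernel package installed
    sudo apt-get purge linux-image-2.6.38-8-generic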
Tethys .
2012-03-02 12:32:44 UTC
On Fri, Mar 2, 2012 at 12:21 PM, James Courtier-Dutton wrote:
Post by James Courtier-Dutton
As a side note, if you need the rollback feature, you only need /boot2
on Red Hat based installations; Debian and Ubuntu do not need it.
Unless you can point to specific examples of how to roll back to a
previous distribution using Debian/Ubuntu, I'm going to accuse you of
spreading FUD. A distribution is more than a kernel. I don't upgrade
distributions in place, I install a new one in parallel, hence the
need for a separate /boot. If it doesn't work, then the complete old
distribution is still there, and rolling back to it takes 60 seconds.

Note that there has been some work on rolling back an in-place
upgrade, but AFAIK the bulk of it has been done in the Red Hat world.
If there's been a Debianish equivalent, I don't know about it and
I'd appreciate some links. But I'm still wary of such things. My
solution works well.

Tet
--
"Java is a DSL for taking large XML files and converting them to stack
traces" -- Bulat Shakirzyanov
James Courtier-Dutton
2012-03-02 12:55:08 UTC
Post by Tethys .
Unless you can point to specific examples of how to roll back to a
previous distribution using Debian/Ubuntu, I'm going to accuse you of
spreading FUD. A distribution is more than a kernel.
[SNIP]
I was only referring to the kernel, not the entire distribution.
But, given Jan's comments, it seems that I was the only one seeing the
problem of Red Hat deleting the old kernel when upgrading the kernel.
Jan van Bergen
2012-03-02 12:32:22 UTC
On 02/03/2012 12:21, James Courtier-Dutton wrote:
[SNIP]
Post by James Courtier-Dutton
This is because during an upgrade, Red Hat deletes the old kernel and
replaces it with a new one, whereas Ubuntu just adds the new kernel to
the GRUB menu without deleting the old one. On Ubuntu you then manually
uninstall the old kernel once the new one is up and running.
I think the Ubuntu/Debian method is safer than the Red Hat method.
Not completely true, in my experience: Red Hat keeps the last 3 kernels
in /boot, and if you add one it deletes the oldest. So after an upgrade
you can still go back to the previous kernel (or even the one before
that). Pretty safe in my experience.
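I believe that behaviour is controlled by the installonly_limit setting
in /etc/yum.conf, so you can keep more kernels around if you want:

    # /etc/yum.conf
    installonly_limit=3   # raise this to keep more old kernels in /boot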

Jan

James Courtier-Dutton
2012-03-02 12:50:30 UTC
Post by Jan van Bergen
Not completely true, in my experience: Red Hat keeps the last 3 kernels
in /boot, and if you add one it deletes the oldest.
[SNIP]
My experience with Red Hat is limited; I use mostly Debian and Ubuntu.
But as I generally know Linux, I have been called on to help at work.
On some of these occasions, the person needing help had done an upgrade
that changed the kernel, and for whatever reason the system would no
longer boot. Restoring the old kernel from a backup device allowed the
system to boot again. Because this happened quite a few times, I
assumed that Red Hat deleted the old kernel when upgrading the kernel.
If what you say is true, then there was something wrong with the Red
Hat systems I saw: if they had kept the last 3 kernels, I would not
have had to recover from the backup.
That said, I have not been called to help with Red Hat problems
recently, so maybe the problem has been resolved generally. Keeping the
last 3 kernels would certainly resolve it.
Philip Hands
2012-03-02 12:44:35 UTC
Post by Tethys
Post by nk oorda
What are the best practices for Linux partitioning & Mount points for
Production systems
- /boot (ext3, about 1GB)
- /boot2 (ext3, about 1GB)
- LVM for the rest of the "disk" (which is usually an array
rather than a single disk)
My normal setup is /boot and / on non-LV partitions (normally on
software RAID1 across every physical disk in the machine, and ensuring
that every disk also has grub installed, and working, on it). That way
you can get somewhere towards a bootable system if any single disk
survives the disaster -- being software RAID1, in extremis you can get
it to boot with root=/dev/sda2 (say) and it'll even survive you messing
up the raid modules for the kernel.
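Getting grub onto every member of the mirror is just a matter of
running grub-install against each physical disk (device names here are
illustrative):

    # /boot on md0, mirrored across sda1 and sdb1
    grub-install /dev/sda
    grub-install /dev/sdb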

The second boot partition is not a bad idea, although if you have grub
(in multiple copies) and keep at least one old kernel around, I doubt
you're going to break things badly enough to need a second boot very
often, and if you do, chances are you'll have let the second boot rot
somehow.

Then the rest of the disks are used for LVM, with split /usr, /var ...

Of course, if you're using hardware RAID it may be less easy to do that
sort of split, but I've shied away from hardware RAID since finding it
hard to get at the data if one loses the controller and doesn't have a
sufficiently similar one lying around.

Oh, and as for sizing -- with the new GPT partitions, recently I've been
setting the first partition as the BIOS boot partition, using the first
1023K, then 250-ish MB for the /boot and 750-ish for the /, so that they
and the LVM partitions all start on MB boundaries.

1GB for /boot seems a bit much to me, but perhaps that'll change now
that various people are deciding that a separate /usr can only work if
you have an initramfs, which may well then lead to initramfs images
getting rather bigger in future. I suppose you could go for two 512MB
/boots, and if that's ever too small you'd be able to switch to one 1GB
one.
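With GPT that layout can be scripted with sgdisk, something like the
following sketch (partition sizes as described above; the device name
is a placeholder):

    sgdisk -n 1:0:+1M   -t 1:ef02 /dev/sda   # BIOS boot partition
    sgdisk -n 2:0:+256M -t 2:8300 /dev/sda   # /boot
    sgdisk -n 3:0:+768M -t 3:8300 /dev/sda   # /
    sgdisk -n 4:0:0     -t 4:8e00 /dev/sda   # LVM PV for the rest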

Cheers, Phil.
--
|)| Philip Hands [+44 (0)20 8530 9560] http://www.hands.com/
|-| HANDS.COM Ltd. http://www.uk.debian.org/
|(| 10 Onslow Gardens, South Woodford, London E18 1NE ENGLAND
Alain Williams
2012-03-02 15:31:06 UTC
Post by Philip Hands
My normal setup is /boot and / on non-LV partitions (normally on
software RAID1 across every physical disk in the machine, and ensuring
that every disk also has grub installed, and working, on it).
[SNIP]
An interesting problem happened recently on a machine that I look
after. The machine had had intermittent disk hardware problems
(low-level motherboard / SATA disk configuration issues). A new disk
was purchased, and it ended up with a 3-way mirror (RAID 1 - Linux
kernel MD).

Later the machine would slow down for 30-60 seconds maybe once or twice
a day. Eventually I traced it to sda appearing to take that long to
service requests. I tested this by removing sda from the mirror set ...
no problems after a few days, and so I thought 'problem solved'.

The disk was not removed from the machine.

A couple of months later I asked the user to reboot the machine, since
a new kernel had been installed .... they did, but it ran the old
kernel ????

Long story short: the BIOS booted from the first disk that it saw: sda.
That was no longer being updated because it was not in the mirror any
more ... so grub.conf on that disk was not updated.

Moral: remove unused hardware.
--
Alain Williams
Linux/GNU Consultant - Mail systems, Web sites, Networking, Programmer, IT Lecturer.
+44 (0) 787 668 0256 http://www.phcomp.co.uk/
Parliament Hill Computers Ltd. Registration Information: http://www.phcomp.co.uk/contact.php
#include <std_disclaimer.h>
Nix
2012-03-06 13:36:00 UTC
Post by Alain Williams
Long story short: the BIOS booted from the first disk that it saw: sda. That was
no longer being updated because it was not in the mirror any more ... so
grub.conf on that disk was not updated.
Moral: remove unused hardware.
RAID spares of course require that you leave unused hardware *in* the
machine in case of failure! A nice quandary.
--
NULL && (void)
JLMS
2012-03-02 18:05:52 UTC
Post by nk oorda
My concern is that one of the developers accidentally deleted the /usr
files using sudo access. If I can somehow protect the core system from
developers' mistakes, that would be really good.
CentOS has the following:

- ACLs
- SELinux
- Virtualization
- LVM snapshots

Using one or more of these you can constrain quite a bit what anybody
can do, give free rein to people who could otherwise be destructive,
and recover from their mistakes.
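For example, an LVM snapshot taken before letting anyone near the box
gives you a cheap undo button (names and sizes are illustrative, and
merging a snapshot back needs a reasonably recent LVM2; on an in-use LV
the merge takes effect at the next activation):

    lvcreate -s -n usr_before -L 2G /dev/vg0/lv_usr   # snapshot of /usr
    # ... developer does something regrettable under /usr ...
    lvconvert --merge /dev/vg0/usr_before             # roll /usr back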

I would not allow a developer on a production server, but that is a
completely different topic.
t.clarke
2012-03-07 06:41:51 UTC
Just out of curiosity:
If the read rate on outer tracks is greater than on inner tracks, this
implies that the outer tracks have more sectors per track than the
inner tracks. So I assume that all disc accesses must be done by
logical block addressing rather than the 'old' way of
cylinder/head/sector, with the disc firmware 'knowing' how many sectors
on each cylinder (group)?

With regard to seek time, do discs generally re-position the heads when
idle to a default position (cylinder 0?), or stay where they are until
the next read/write? If the heads don't reposition when idle, then I
guess it really doesn't matter where the data is on the platter,
providing the most frequently accessed stuff is close together. To that
end, does LVM defeat that object when resizing partitions by having the
logical partition spread across different areas of the disc?
Oh, and is cylinder 0 on the outside or the inside?

Personally I have always worked on the basis of several smaller discs being
better than one big one, thus minimising seek times.

Tim
Nix
2012-03-07 16:37:45 UTC
Post by t.clarke
If the read rate on outer tracks is greater than on inner tracks, this
implies that the outer tracks have more sectors per track than the
inner tracks.
Quite so. A lot more. Disks are generally divided into a number of
zones with increasingly many sectors per track in each (though I'd not
be surprised to find that on modern disks each zone is only a cylinder
large; last I checked they were a few dozen cylinders each).
Post by t.clarke
So I assume that all disc accesses must be done by logical block
addressing rather than the 'old' way of cylinder/head/sector, with the
disc firmware 'knowing' how many sectors on each cylinder (group)?
Yeah. LBA has been the only means of disk access anything actually uses
for a very long time. Modern (by which I mean anything after the early
90s) disk firmware is fairly bright, maintaining a filesystem of sorts
on the disk itself permitting failed sectors to be silently spared out
in favour of one of a quite large pool of replacements with no cost but
access time (well, if the failure is detected at write time rather than
read time the sparing is silent).

The cylinder/head/sector stuff is still maintained for the sake of old
BIOSes and old OSes using those BIOSes but is entirely fictional by now:
IIRC with ATAPI controllers the BIOS translates the c/h/s stuff into LBA
itself (though my memory of this is fuzzy: the controller may still do
the translation).
Post by t.clarke
With regard to seek time, do discs generally re-position the heads when
idle to a default position (cylinder 0?)
No way, that would be appallingly inefficient. They must stay where they
were in order to benefit from access locality patterns. (Some drives may
autopark after a period of idle time: pretty much all of them autopark
on powerdown.)
Post by t.clarke
To that end, does LVM defeat that object when resizing partitions by
having the logical partition spread across different areas of the disc?
Well, yes, but it only spreads LVs into multiple chunks if it has no
choice: normally it squeezes LVs into the first available unused space,
but you can require it to find a contiguous block and use that instead,
if you prefer slower-but-consistent access times.

Most LVs are normally contiguous because LVs aren't created and deleted
*that* often. (Nobody has ever written an LV defragmenter, though you
can do it yourself by hand with lots of calls to pvmove.)
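The by-hand version is literally just pvmove with explicit extent
ranges, and you can demand contiguity up front at creation time (names
and extent ranges here are placeholders):

    lvcreate -n lv_fast -L 20G --alloc contiguous vg0  # one unbroken run
    pvmove /dev/sda3:1000-1999 /dev/sda3:5000-5999     # shift extents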
Post by t.clarke
Oh, and is cylinder 0 on the outside or the inside?
Outside: it's fast.
Post by t.clarke
Personally I have always worked on the basis of several smaller discs being
better than one big one, thus minimising seek times.
I do the same, but so's I can RAID them and get more reliability as well
as more speed from parallel access :)
--
NULL && (void)
Paul Wilkins
2012-03-07 17:07:47 UTC
Hi There,

We are representing a Global Financial services company who are looking for
a junior to mid level Applications support analyst with good Linux / unix
understanding to command level. This is a client facing technical support
role that will suit an ambitious individual keen to develop their technical
expertise and also to grow a good understanding of the FX trading business.
They are paying £30-40k

Please get in touch if anyone springs to mind - many thanks.

Kind Regards,


Paul Wilkins
Senior Consultant


Switchboard: 020-8-354-4149
***@starfish-it.com
www.starfish-it.com

Starfish IT

Tethys .
2012-03-07 18:14:48 UTC
Post by Paul Wilkins
We are representing a Global Financial services company who are looking for
a junior to mid level Applications support analyst with good Linux / unix
understanding to command level.
Yeesh. Where do I begin? You're advertising for a technical job, and
yet are clearly so utterly technically inept that I struggle to see
why anyone would trust you to accurately represent them to a potential
employer. You haven't bothered to read the mailing list FAQ [1], which
clearly states:

- Job postings must have a subject starting with "VACANCY:"
- Job postings must include a location (not just "London" - you didn't
even get that far)
- No job postings from agencies
- No top posting
- Trim your messages, quoting only the relevant parts

Of course, there weren't any relevant parts to quote, because you
posted by replying to a completely unrelated post, thus displaying an
alarming level of ignorance about how email works, and breaking
threading for everyone else.

If there's a next time, please make an attempt to do better.

Tet

[1] http://www.hinterlands.org/gllugfaq (linked to from http://gllug.org.uk)
--
"Java is a DSL for taking large XML files and converting them to stack
traces" -- Bulat Shakirzyanov
Nix
2012-03-07 18:27:13 UTC
Post by Tethys .
Yeesh. Where do I begin? You're advertising for a technical job, and
yet are clearly so utterly technically inept that I struggle to see
why anyone would trust you to accurately represent them to a potential
employer.
Quite. I note that my current employer does not use agencies (despite,
or perhaps because of, our hunger for clued staff). Neither does RH,
nor, I think, Canonical. I would be unsurprised to find that SuSE
(whoever owns them today) rejects them too. That's most of the big
Linux distributors.

This sort of posting makes it clear why. Would *you* accept candidates
who'd been screened for suitability by someone you would not yourself
employ? Surely not.

(I never used agencies on the employee side either, if only because
they generally insist on mangling your CV into something uglier than
you can possibly imagine.)
Post by Tethys .
- Job postings must include a location (not just "London" - you didn't
even get that far)
Give him a tiny bit of credit. He had 'London' in his subject line.
(Because Kings Cross and Sutton are basically the same place and equally
easy to get to. Right?)
--
NULL && (void)
John Hearns
2012-03-07 18:06:36 UTC
Post by t.clarke
Personally I have always worked on the basis of several smaller discs
being better than one big one, thus minimising seek times.
Post by Nix
I do the same, but so's I can RAID them and get more reliability as well
as more speed from parallel access :)
Choose no life. Choose sysadminning. Choose no career.
Choose no family. Choose a ... big computer, choose hard
disks the size of washing machines, old cars, CD ROM writers
and electrical coffee makers. Choose no sleep, high caffeine
and mental insurance. Choose fixed interest car loans. Choose
a rented shoebox. Choose no friends. Choose black jeans and
matching combat boots. Choose a swivel chair for your office
in a range of ... fabrics. Choose NNTP and wondering why
the ... you're logged on on a Sunday morning. Choose sitting
in that chair looking at mind-numbing, spirit-crushing web
sites, stuffing ... junk food into your mouth. Choose
rotting away at the end of it all, pishing your last on some
miserable newsgroup, nothing more than an embarrassment to
the selfish, fucked up lusers Gates spawned to replace the
computer-literate.

Choose your future.

Choose sysadminning.
Nix
2012-03-07 18:27:45 UTC
Post by John Hearns
Choose no life. Choose sysadminning. Choose no career.
[SNIP]
Haven't seen this in *years*. I wish my adminspotting t-shirt hadn't
worn out...
--
NULL && (void)