Bringing up dart

Revision as of 02:00, 2 December 2012

Hardware

  • SuperMicro SC742 4U server case
  • SuperMicro X9SRA motherboard
  • Intel Xeon E5-2650 Sandy Bridge-EP 2.0GHz (2.8GHz Turbo Boost) LGA 2011 95W 8-Core Server Processor
  • SUPERMICRO SNK-P0050AP4 Heatsink for Supermicro X9DR3-F Motherboard
  • 32GB ECC registered RAM (4 x 8GB)
  • Four 15K 150GB drives HITACHI HUS153014VL
  • Corsair Neutron Series GTX CSSD-N240GBGTX-BK 2.5" 240GB SATA III Internal Solid State Drive (SSD)
  • 3Ware 9690SA-4I SATA/SAS RAID controller

Drive Config

Hardware

Neutron SSD 240
Revo 60
3Ware 435 (4 x Hitachi 150GB in RAID 5)
WDC Black 1000
2 x Neutrons => partitioned for boot and md0 RAID 1 = operating system
2 x Revos => md1 RAID 0 = fast workspace
3ware RAID => /var and other fast data storage
Black = home, bulky data, swap

Volume groups

vg_mirror
vg_workspace
vg_raid
vg_bulk

Logical volumes

lv_root      /
lv_workspace /workspace
lv_var       /var
lv_home      /home
lv_swap      SWAP
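The logical-volume table above can be sketched as lvcreate calls; the sizes are placeholder assumptions, not values from the original notes, and would be chosen to fit each drive:

```shell
# carve the logical volumes out of their volume groups (sizes are placeholders)
lvcreate -n lv_root      -L 60G      vg_mirror     # /
lvcreate -n lv_workspace -l 100%FREE vg_workspace  # /workspace
lvcreate -n lv_var       -L 200G     vg_raid       # /var
lvcreate -n lv_home      -L 800G     vg_bulk       # /home
lvcreate -n lv_swap      -L 32G      vg_bulk       # swap
mkswap /dev/vg_bulk/lv_swap                        # format the swap LV
```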

Neutron drive: Build a broken mirror, add second drive later

parted /dev/sda
mkfs.ext4 /dev/sda1
# build the RAID1 degraded, with a "missing" placeholder for the second drive
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda2 missing
# put RAID1 into LVM2
pvcreate /dev/md0
pvdisplay
vgcreate vg_mirror /dev/md0
# later, set up second drive as the mirror
parted /dev/sdb
mkfs.ext4 /dev/sdb1
mdadm --add --verbose /dev/md0 /dev/sdb2
# the mirror should automatically be added to LVM since it's part of /dev/md0
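While the array runs degraded (and again after the second drive is added), its state can be checked like this; the exact "degraded"/resync wording in the output is an assumption about mdadm's report, not taken from the original notes:

```shell
cat /proc/mdstat          # array status; shows resync progress after the --add
mdadm --detail /dev/md0   # per-device state; reports degraded until sdb2 is synced
```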

Revo drive: completely managed under LVM2

pvcreate /dev/sdc1 /dev/sdd1
vgcreate vg_workspace /dev/sdc1 /dev/sdd1
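Since the two Revos were planned as an md1 RAID 0 but are instead managed entirely under LVM2, the RAID0-like throughput would come from striping at lvcreate time; the two-stripe layout and 64 KB stripe size below are assumptions about the intent:

```shell
# stripe the workspace LV across both PVs for RAID0-like throughput
lvcreate -n lv_workspace -l 100%FREE -i 2 -I 64 vg_workspace
mkfs.ext4 /dev/vg_workspace/lv_workspace
```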

3Ware RAID5: completely managed under LVM2

pvcreate /dev/sd??
vgcreate vg_raid /dev/sd??

Black drive: completely managed under LVM2

# with parted, set up one LVM partition; mark the partition type as 8e (Linux LVM)
pvcreate /dev/sd??
vgcreate vg_bulk /dev/sd??
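With all four volume groups created, the whole LVM stack can be verified at a glance:

```shell
pvs   # physical volumes and the VG each belongs to
vgs   # volume groups: vg_mirror, vg_workspace, vg_raid, vg_bulk
lvs   # logical volumes within each VG
```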

CentOS install

Install minimal system from DVD
vi /etc/sysconfig/network-scripts/ifcfg-eth0
vi /etc/resolv.conf
service network restart
yum -y install emacs
yum -y install openssh-clients
yum -y install gcc unzip
yum -y install hdparm
yum -y install httpd
yum update
reboot
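The ifcfg-eth0 edit above typically pins a static address so the minimal install is reachable over SSH; every value below is an illustrative placeholder, not this machine's real configuration:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0  (example values only)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```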

Basic software install

Install PostGIS 2.0

Install ArcGIS Server