HP Infrastructure

This is from a May 2015 email from Bob Gobeille, former project lead, that described the HP infrastructure. As of October 2015, these systems had been set up (base OS only, no actual software/services) at the Linux Foundation, so to the extent that they are still used or referenced, this may help explain what they might be doing.

VMWare hosts
We currently use 4 VMWare hosts.
All are ProLiant BL460c Gen8, 64 GB, with dual Xeon E5-2650 CPUs (8 cores).
Two of these use 2.00 GHz CPUs; the other two are v2 and run at 2.6 GHz.
We have about 300 GB for our 13 current testing VMs, but we also keep older
distros around for a bit, so say 500 GB.
This is a large system because it is used for much more than FOSSology; I only
mention it to illustrate where we are coming from. We use VMWare because we
have it, and I would expect this to change to whatever you support. How do you
do VM provisioning? I explain more about our VM usage below.

Scale testing
We currently use a three-server system to test FOSSology at a typical
production scale.
All are ProLiant BL460c Gen7, dual Xeon X5650 2.67 GHz (6 cores), but they
differ in RAM and disk:

Server 1 (FOSSology server): 64 GB RAM, 72 GB /, 300 GB /postgres
Server 2 (agents): 16 GB RAM, 72 GB /, 4 TB /srv (only about 1 TB currently in use)
Server 3 (agents): 64 GB RAM, 72 GB /, 4 TB /srv (only about 1 TB currently in use)

This could use VMs instead of dedicated hardware, but it does use a lot of
memory. We could work with 4 TB total instead of 8, and a single agent server
instead of two. This is not a heavily used system, but it is critical for our
testing, since lots of problems are only noticeable on larger systems.

Performance Regression Testing
This is a minimal dedicated system (8 GB, 100 GB /) for consistent timed tests.
The requirement is consistency, but these tests are only run nightly and take
< 5 minutes, so perhaps a VM that can use the whole physical machine for
5 minutes a night is all we need.
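A nightly timed run like this implies some check that flags when timings drift.
A minimal sketch of such a check in shell follows; the baseline, measured time,
and 10% threshold are all illustrative assumptions, not values from the source:

```shell
#!/bin/sh
# Sketch of a nightly perf-regression check (illustrative numbers only):
# flag a regression if the measured run exceeds the baseline by more than 10%.
baseline=240                           # seconds: reference run (~4 min)
measured=280                           # seconds: tonight's run
limit=$((baseline + baseline / 10))    # allow up to 10% drift (264 s)
if [ "$measured" -gt "$limit" ]; then
    echo "regression: ${measured}s vs ${baseline}s baseline"
else
    echo "ok: ${measured}s"
fi
```

In practice the baseline would be stored from a known-good run and the measured
time captured from the actual test suite, which is why consistency of the
underlying hardware matters more than its size here.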

CI server
This server runs our CI (Jenkins) and checkin tests.
ProLiant BL465c Gen1, 12 GB, dual-core AMD Opteron, 2400 MHz, 100 GB /
Could easily be a VM. It starts tests on git checkin, does an install, and runs
the tests; the whole run takes < 15 minutes.
We are also trying to move as much as we can to GitHub/Travis, so eventually I
would like this server to go away if we can get everything done with Travis.

Build server
ProLiant BL460c Gen1, 16 GB, dual quad-core Intel Xeon, 2666 MHz, 100 GB /, 135 GB /home
This could also be a VM. We do nightly builds on 13 VMs, which take 1.5-2.5
hours. Also nightly, we do package install testing on all the VMs (< 20 min),
source install testing (< 15 min, running all VMs in parallel), and
documentation generation (SchemaSpy and Doxygen), which takes around 15 min.
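Scheduled as plain cron entries, the nightly cycle above might look like the
fragment below. The user name, script paths, and start times are assumptions
for illustration; the source only gives the durations of each step:

```shell
# Hypothetical /etc/cron.d fragment for the nightly build/test cycle.
# Times are chosen so each step finishes before the next begins.
0 1 * * *  build  /opt/build/nightly-build.sh       # builds on 13 VMs, 1.5-2.5 h
0 4 * * *  build  /opt/build/pkg-install-tests.sh   # < 20 min across all VMs
30 4 * * * build  /opt/build/src-install-tests.sh   # < 15 min, VMs in parallel
0 5 * * *  build  /opt/build/gen-docs.sh            # SchemaSpy + Doxygen, ~15 min
```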

Demo Server
We would like to run a small FOSSology demo server so people can try it out.
I’m not sure what “small” means, since we don’t have this today except at UNO,
and they would like to move it to us. I’m thinking a system with 16 GB and
around 1-2 TB for the repository.

Here are the machines that were created at the LF. The OSes chosen and the
system specs were based on a conversation with Bob. Note that the “fo-DISTRO”
machines correspond to the “testing VMs” on the VMWare hosts mentioned above.
No systems were set up for scale testing or performance regression testing.

(The HOSTNAME column of this table could not be recovered; the remaining
columns are reconstructed below.)

  RAM    VCPU    DISK
  2.0G    1      3.0G
  2.0G    1      3.5G
  2.0G    1      2.0G
  2.0G    1      2.0G
  2.0G    1      3.0G
  2.0G    1      2.0G
  2.0G    1      2.0G
  2.0G    1      2.0G
  2.0G    1      2.0G
  2.0G    1      2.0G
  2.0G    1      2.0G
  2.0G    1      3.0G
  2.0G    1      3.0G
  2.0G    1      5.0G
  2.0G    1      5.0G
  2.0G    1      3.5G
  2.0G    1      5.0G
  2.0G    1      5.0G
  2.0G    1      5.0G
  2.0G    1      5.0G
  2.0G    1      5.0G
  2.0G    1      4.0G
  2.0G    1      5.0G
  2.0G    1      5.0G
  2.0G    1      5.0G
 64.0G    4    235.0G
 16.0G    4    200.0G
 12.0G    2    100.0G
infrastructure/hp_infra.txt · Last modified: 2016/09/28 16:32 by Eric Searcy