
hp interview questions

Top hp frequently asked interview questions

How to make my HP server boot faster?

I have an HP ProLiant DL370 G6 server that I am using as a workstation. With a discrete Radeon HD 6xxx GPU installed, it takes 60 seconds on a reboot or cold boot before the screen POSTs. What can I do to make it boot faster?


I have had a chance to use an HP Gen8 server. It POSTs quickly and shows the various CPU/memory/QPI initialization steps. It still takes a long time, but at least I can see what's going on.


Source: (StackOverflow)

HP plans to restrict access to ProLiant server firmware - consequences?

I've been a longtime advocate for HP ProLiant servers in my system environments. The platform has been the basis of my infrastructure designs across several industries for the past 12 years.

The main selling points of ProLiant hardware have been long-lasting product lines with predictable component options, easy-to-navigate product specifications (Quickspecs), robust support channels and an aggressive firmware release/update schedule for the duration of a product's lifecycle.

This benefits the use of HP gear in primary and secondary markets. Used and late-model equipment can be given a new life with additional parts or through swapping/upgrading as component costs decline.

One of the unique attributes of HP firmware is the tendency to introduce new functionality along with bugfixes in firmware releases. I've seen Smart Array RAID controllers gain new capabilities, server platforms acquire support for newer operating systems, and serious performance issues get resolved, all through firmware releases. Reading through a typical changelog history reveals how much testing and effort goes into creating a stable hardware platform. I appreciate that and have purchased accordingly.

Other manufacturers seem to ship product as-is and only focus on correcting bugs in subsequent firmware releases. I rarely run firmware updates on Supermicro and Dell gear. But I deem it irresponsible to deploy HP servers without an initial firmware maintenance pass.


Given this, the early reports of an upcoming policy change by HP regarding server firmware access were alarming...

The official breakdown:


Access to select server firmware updates and SPP for HP ProLiant Servers will require entitlement and will only be available to HP customers with an active contractual support agreement, HP Care Pack service, or warranty linked to their HP Support Center User ID. As always, customers must have a contract or warranty for the specific product being updated.

Essentially, you must have active warranty and support on your servers in order to access firmware downloads (and presumably, the HP Service Pack for ProLiant DVD).

This will impact independent IT technicians, internal IT staff and customers running older equipment the most, followed by people seeking deals on used HP equipment. I've provided many Server Fault answers that boil down to "updating this component's firmware will solve your problem". The recipients of that advice likely would not have active support and would be ineligible for firmware downloads under this policy.

  • Is this part of a growing trend of vendor lock-in? HP ProLiant Gen8 disk compatibility was a precursor.
  • Is HP overstepping bounds by restricting access to updates that some people have depended upon?
  • Will the result be something like the underground market for Cisco IOS downloads?
  • How does this sit with you, your organization or purchase decision makers? Will it impact future hardware decisions?
  • Is this any incentive to keep more systems under official warranty or extend Care Packs on older equipment?
  • What are other possible ill-effects of this policy change that I may not have accounted for?

Update:
A response on the HP Support Services Blog - Customers for Life

Update:

This is in effect now. I'm seeing the prompt when trying to download BIOS updates for my systems. A login with an HP Passport account is now required to proceed with the download.



Source: (StackOverflow)


How do servers with redundant power supplies balance consumption?

I have several servers from the HP DL360 line (generations 5-8). Each of these servers has two power supplies installed. The 2 power supplies in each server are fed from different circuits.

My question is will the power draw be roughly balanced between these two circuits, or do the servers consider one power supply to be a "primary" and the other to be a "backup" with a smaller power draw?


Source: (StackOverflow)

Is it possible to influence the way CPUs are enumerated under Linux?

I have an HP DL380 G7 with two mismatched CPUs in it. One is a quad-core CPU with faster cores, and the other is a six-core CPU with slower cores.

On this box I run an application that due to licensing reasons will only use CPU0-CPU3.

For me it would be desirable for the faster cores on the quad core CPU to enumerate to CPU0-CPU3 in the operating system, giving me a performance bonus for a) using faster clocked cores, and b) keeping all threads on the same physical CPU.

Is there a way to make this happen, either within the BIOS, or in a configuration file or boot option in Linux?

The specific CPU models are:

Intel(R) Xeon(R) CPU E5649 @ 2.53GHz (hex core)

Intel(R) Xeon(R) CPU E5640 @ 2.67GHz (quad core)
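
For reference, a quick way to verify how the kernel has numbered the cores (and whether CPU0-CPU3 actually landed on the faster E5640 package) is to read the topology fields the kernel exposes. Below is a minimal Python sketch, assuming the standard Linux /proc/cpuinfo layout; it is only an inspection aid, not a way to change the enumeration.

#!/usr/bin/env python3
# Minimal sketch: print each logical CPU with its physical package, core id
# and model string, so you can see which socket the kernel enumerated first.
# Assumes the standard x86 /proc/cpuinfo fields ("processor", "physical id",
# "core id", "model name").

def parse_cpuinfo(path="/proc/cpuinfo"):
    cpus, current = [], {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:               # a blank line ends one CPU block
                if current:
                    cpus.append(current)
                    current = {}
                continue
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:
        cpus.append(current)
    return cpus

if __name__ == "__main__":
    for cpu in parse_cpuinfo():
        print("CPU{:>2}  package {}  core {}  {}".format(
            cpu.get("processor"),
            cpu.get("physical id"),
            cpu.get("core id"),
            cpu.get("model name")))

If the numbering simply follows socket order (which it commonly does, since the kernel walks the ACPI tables the BIOS builds), physically swapping the two processors between sockets may turn out to be the most practical way to get the quad-core's cores enumerated as CPU0-CPU3; treat that as a hypothesis to verify, not a guarantee.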


Source: (StackOverflow)

HP ProLiant DL360 G7 hangs at "Power and Thermal Calibration" screen

I have a new HP ProLiant DL360 G7 system that is exhibiting a difficult-to-reproduce issue. The server randomly hangs at the "Power and Thermal Calibration in Progress..." screen during the POST process. This typically follows a warm-boot/reboot from the installed operating system.


The system stalls indefinitely at this point. Issuing a reset or cold-start via the ILO 3 power controls makes the system boot normally without incident.

When the system is in this state, the ILO 3 interface is fully accessible and all system health indicators are fine (all green). The server is in a climate-controlled data center with power connections to a PDU. Ambient temperature is 64°F/17°C. The system was placed in a 24-hour component testing loop prior to deployment with no failures.

The primary operating system for this server is VMware ESXi 5. We initially tried 5.0 and later a 5.1 build; both were deployed via PXE boot and kickstart. In addition, we are testing with bare-metal Windows and Red Hat Linux installations.

HP ProLiant systems have a comprehensive set of BIOS options. We've tried the default settings in addition to the Static High Performance profile. I've disabled the boot splash screen and just get a blinking cursor at that point instead of the calibration screen described above. We've also tried some VMware best-practice BIOS settings. We've seen an advisory from HP that seems to outline a similar issue, but it did not fix our specific problem.

Suspecting a hardware issue, I had the vendor send an identical system for same-day delivery. The new server was a fully-identical build with the exception of disks. We moved the disks from the old server to the new. We experienced the same random booting issue on the replacement hardware.

I now have both servers running in parallel. The issue hits randomly on warm-boots. Cold boots don't seem to have the problem. I am looking into some of the more esoteric BIOS settings like disabling Turbo Boost or disabling the power calibration function entirely. I could try these, but they should not be necessary.

Any thoughts?

--edit--

System details:

  • DL360 G7 - 2 x X5670 hex-core CPUs
  • 96GB of RAM (12 x 8GB Low-Voltage DIMMs)
  • 2 x 146GB 15k SAS Hard Drives
  • 2 x 750W redundant power supplies

All firmware is up to date as of the latest HP Service Pack for ProLiant DVD release.

Calling HP and trawling the interwebz, I've seen mentions of a bad ILO 3 interaction, but this happens with the server on a physical console, too. HP also suggested power source, but this is in a data center rack that successfully powers other production systems.

Is there any chance that this could be a poor interaction between low-voltage DIMMs and the 750W power supplies? This server should be a supported configuration.


Source: (StackOverflow)

Is it necessary to burn-in RAM for server-class hardware?

Considering the fact that many server-class systems are equipped with ECC RAM, is it necessary or useful to burn-in the memory DIMMs prior to their deployment?

I've encountered an environment where all server RAM is put through a lengthy burn-in/stress-testing process. This has delayed system deployments on occasion and impacts hardware lead time.

The server hardware is primarily Supermicro, so the RAM is sourced from a variety of vendors rather than directly from the manufacturer as with a Dell PowerEdge or HP ProLiant.

Is this a useful exercise? In my past experience, I simply used vendor RAM out of the box. Shouldn't the POST memory tests catch DOA memory? I've responded to ECC errors long before a DIMM actually failed, as the ECC thresholds were usually the trigger for warranty replacement.

  • Do you burn-in your RAM?
  • If so, what method(s) do you use to perform the tests?
  • Has it identified any problems ahead of deployment?
  • Has the burn-in process resulted in any additional platform stability versus not performing that step?
  • What do you do when adding RAM to an existing running server?
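
For what it's worth, the ECC-threshold approach mentioned above can be scripted instead of (or alongside) a burn-in pass. Here is a minimal sketch, assuming a Linux host with the appropriate EDAC driver loaded; the sysfs layout is the standard one, and the alert threshold is a hypothetical value.

#!/usr/bin/env python3
# Minimal sketch: poll the Linux EDAC counters and flag any memory controller
# reporting corrected (CE) or uncorrected (UE) errors. Assumes the EDAC driver
# for your chipset is loaded and exposes /sys/devices/system/edac/mc/.
import glob
import os

EDAC_ROOT = "/sys/devices/system/edac/mc"
CE_THRESHOLD = 10   # hypothetical alert threshold for corrected errors

def read_count(path):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return 0

for mc in sorted(glob.glob(os.path.join(EDAC_ROOT, "mc*"))):
    ce = read_count(os.path.join(mc, "ce_count"))
    ue = read_count(os.path.join(mc, "ue_count"))
    status = "OK"
    if ue > 0:
        status = "uncorrected errors - replace the DIMM"
    elif ce >= CE_THRESHOLD:
        status = "corrected errors above threshold - watch or replace"
    print("{}: ce={} ue={} -> {}".format(os.path.basename(mc), ce, ue, status))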

Source: (StackOverflow)

Would upgrading RAM in an HP LaserJet printer improve its performance?

Would a RAM upgrade matter for an HP LaserJet 4050N printer? We have this real workhorse of a laser printer. It's an HP 4050N and it's been in use for many years. In the past few years, I've noticed that the time it spends processing a job before it starts printing can be quite long. In some cases, print jobs process for so long that we end up killing them and sending them to a different printer on the network.

This HP 4050N printer has a total of 16 MB of RAM. I believe 8 MB of it is built in, which I suspect is on the motherboard. There are three slots for RAM, and one slot holds an 8 MB stick. I've looked in the User's Guide, and apparently this model can go up to a maximum of 200 MB of RAM.

I've seen RAM for this printer on sale very cheap in either 64 MB or 128 MB.

My question is: would upgrading the RAM on this printer, bringing the total up to 80 MB or 144 MB, noticeably improve the processing time when printing output that contains modern graphics, enough to be worth doing? Or is RAM even the issue, and is the processing speed of the printer's CPU the actual bottleneck?

Update: The RAM I ordered for $10.00 (128 MB) arrived and I installed it, so the HP 4050N went from a total of 16 MB to 144 MB of RAM. I printed the test job that previously stayed in "processing" forever and never came out, and after this upgrade it printed normally. This suits our needs. For your situation, as they say, your mileage may vary.


Source: (StackOverflow)

No video output and no status light on HP Proliant Microserver

I have an issue with an N40L HP ProLiant MicroServer. When the server is powered on, the power light turns green, but the machine's status light (the HP logo on the front) stays off and there is no video output through the onboard VGA. The system fan kicks in, and if connected to a network the link light shows activity. I have no extra boards installed (no iLO card or graphics card), only a 4GB non-standard PNY stick of RAM.


Source: (StackOverflow)

CentOS 6 doing lots more IO than CentOS 5

I'm benchmarking an application on two identical servers, one running CentOS 5.8 and the other CentOS 6.2. My application runs much slower (at 50% of the speed or less) on the CentOS 6.2 machine.

In attempting to diagnose the issue, I'm tracking CPU, RAM, and IO throughout the benchmark run. I see that disk reads are significantly higher on the CentOS 6.2 box, as measured with iostat.

Both systems run XFS on the filesystem where my benchmark runs. Both are HP servers with 512MB caching RAID controllers and 8 x 300GB SAS drives in RAID 10.

Here is the output of xfs_info for each:

centos5

meta-data=/dev/cciss/c0d0p5      isize=256    agcount=32, agsize=8034208 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=257094144, imaxpct=25
         =                       sunit=32     swidth=128 blks, unwritten=1
naming   =version 2              bsize=4096 
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

centos6

meta-data=/dev/sda5              isize=256    agcount=4, agsize=57873856 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=231495424, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=113034, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
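
One difference that stands out between the two outputs above: the CentOS 5 filesystem was created with stripe alignment (sunit=32, swidth=128 in 4096-byte blocks) and 32 allocation groups, while the CentOS 6 filesystem has no stripe alignment (sunit=0, swidth=0) and only 4 allocation groups. Whether that fully explains the extra reads is worth testing, but here is the arithmetic as a sketch, assuming a 128 KiB controller chunk size across the 4 data spindles of the 8-disk RAID 10 (the chunk size is an assumption; confirm it against the controller configuration):

#!/usr/bin/env python3
# Worked arithmetic, not a definitive fix: relate the sunit/swidth values seen
# on the CentOS 5 filesystem to an assumed RAID geometry, and print the
# mkfs.xfs alignment options you would use to rebuild the CentOS 6 filesystem
# to match. The device name is a placeholder.

BLOCK_SIZE = 4096          # bsize from xfs_info, in bytes
CHUNK_KIB = 128            # assumed controller stripe unit per data disk
DATA_SPINDLES = 4          # 8 x 300GB in RAID 10 -> 4 data spindles

sunit_blocks = CHUNK_KIB * 1024 // BLOCK_SIZE
swidth_blocks = sunit_blocks * DATA_SPINDLES

print("sunit  = {} blocks ({} KiB)".format(sunit_blocks, CHUNK_KIB))
print("swidth = {} blocks ({} KiB)".format(swidth_blocks, CHUNK_KIB * DATA_SPINDLES))
print("mkfs.xfs -d su={}k,sw={} /dev/sdXn   # placeholder device".format(CHUNK_KIB, DATA_SPINDLES))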

Source: (StackOverflow)

Admin mode on Procurve Switches

Not being a network expert, I spent some time configuring my network until I found my mistake: on my HP ProCurve Switch 1810G, I thought that "Admin mode" controlled whether the administrative interface could be accessed from that port. Well, it actually means whether the port is enabled or not. Extract from the help function:

Admin Mode - Select to enable the port-control administration state. Click to enable and have the port participate in the network.(Default : Enabled )

Well, of course I didn't read the help, because I didn't doubt that it referred to the administrative interface, and so I suspected an error somewhere else.

Anyway, I am wondering if that is a commonly used term for enabling/disabling ports or if HP just wanted to make my life harder? I can't understand why this option isn't just called "Enable port"!?

Here's a screenshot of how it looks in the web interface (yeah, shame on me for using a web interface).


Source: (StackOverflow)

SAN with a few SSD drives, or lots of old fashioned HDDs?

My company is trying to figure out what type of SAN to purchase. This is specifically for database servers that are becoming IO constrained (storage is DAS right now, but we're hitting the limit of a single server and we'd like to add clustering as well).

We need a solution that will deliver around 3,000 IOPS long-term (we currently peak at around 1,000 IOPS). Most of our database operations are small reads/writes. Based on discussions with HP engineers and others online, an HP P2000 with 24 SAS HDDs in a RAID 10 configuration will deliver just short of that speed for roughly $20K. Adding controllers and other items to build out the SAN puts us right around our maximum budget of $30K.

But online, I see that many SAS SSDs are rated at 80,000+ IOPS. Is that realistic to expect? If so, would it be reasonable to get a P2000 or similar entry-level SAN and put a few SSDs in it? Our databases are small, only a couple of TB in total. If we did this, we'd have money left over to buy a second SAN for mirroring/failover, which seems prudent.
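
As a sanity check on the spinning-disk side, the usual back-of-the-envelope estimate for 24 x 15k SAS drives in RAID 10 looks like the sketch below. The per-drive figure and the read/write mix are rule-of-thumb assumptions, not vendor specs, so treat the result as a rough ceiling rather than a quote.

#!/usr/bin/env python3
# Back-of-the-envelope sketch: estimate the random-IOPS ceiling of a RAID 10
# set of 15k SAS drives using the standard write-penalty formula.

DRIVES = 24
IOPS_PER_DRIVE = 175      # rule-of-thumb figure for a 15k RPM SAS spindle
WRITE_PENALTY = 2         # RAID 10: each host write costs two disk writes
READ_FRACTION = 0.7       # assumed workload mix
WRITE_FRACTION = 0.3

raw_iops = DRIVES * IOPS_PER_DRIVE
effective_iops = raw_iops / (READ_FRACTION + WRITE_FRACTION * WRITE_PENALTY)

print("raw back-end IOPS               : {:.0f}".format(raw_iops))
print("effective host IOPS (70/30 mix) : {:.0f}".format(effective_iops))

With those assumptions the array lands in the low 3,000s, roughly in line with the figure quoted for the 24-disk P2000; a write-heavier mix pulls the number down quickly.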


Source: (StackOverflow)

How can I determine the power demand of an HP server's power supply under load?

I bought an HP DL360 G7 1U rackmount server and am looking to put it in a colocation facility. The issue is that almost every facility I have seen either limits the amount of power I can draw or charges extra for power over the base amount provided.

Going strictly by the server's power supply ratings, I believe they draw 7 amps apiece, and there are two of them. But I am under the impression that 7 amps is what a power supply draws when running at full tilt.

How can I determine what the average load on the power supplies will be?

Would the server run happily off 2.5 amps per power supply? That's what the colo will provide me.
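
As a rough framing of the numbers: the 7 A figure on the power supply label is the maximum rated input current, not the typical draw, and the server's actual average and peak wattage can be read from the iLO power meter. Here is a worked arithmetic sketch; the line voltage and the measured wattage are assumptions, so substitute your own readings.

#!/usr/bin/env python3
# Worked arithmetic sketch: convert a per-feed current limit into a watt
# budget and compare it with a (hypothetical) measured draw from iLO.

LINE_VOLTAGE = 120.0        # assumed feed voltage; use 208/230 if that applies
AMPS_PER_FEED = 2.5         # what the colo offers per circuit
FEEDS = 2                   # two redundant supplies on separate circuits

budget_watts = LINE_VOLTAGE * AMPS_PER_FEED * FEEDS
measured_watts = 240.0      # hypothetical average draw reported by iLO

print("power budget across both feeds : {:.0f} W".format(budget_watts))
print("hypothetical measured draw     : {:.0f} W".format(measured_watts))
print("headroom                       : {:.0f} W".format(budget_watts - measured_watts))
print("note: if one feed fails, the surviving circuit must carry the full load")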


Source: (StackOverflow)

Are consumer-class hard disks okay for ZFS?

I recently bought a new server, an HP DL380 G6, and replaced the stock Smart Array P410 controller with an LSI 9211-8i.

My plan is to use ZFS as the underlying storage for Xen, which will run on the same bare metal.

I have been told that you can use SATA disks with the Smart Array controllers, but that because consumer drives lack TLER, CCTL and ERC it's not recommended. Is this the case?

I was wondering: when using the LSI controller in JBOD (RAID pass-through) mode, does the kind of disk I use really have as much of an impact as it would with the Smart Array controller?

I am aware that using a RAID setup not backed by a write cache for virtualization is not good for performance. But I was considering adding an SSD for ZFS. Would that make any difference?

The reason I am so obsessed with using ZFS is dedup and compression. I don't think the Smart Array controller can do either of those.
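
On the TLER/CCTL/ERC point specifically, one thing you can check on a candidate drive is whether it supports SCT Error Recovery Control at all. A minimal sketch, assuming smartmontools is installed; support (and whether the setting survives a power cycle) varies from drive to drive.

#!/usr/bin/env python3
# Minimal sketch: query (and optionally try to set) SCT Error Recovery Control
# on each SCSI/SATA disk node. Requires smartmontools; run as root.
import glob
import subprocess

SET_TIMEOUT = False          # set True to attempt a 7.0 second read/write limit

for dev in sorted(glob.glob("/dev/sd[a-z]")):
    print("=== {} ===".format(dev))
    # Query the current ERC read/write timeouts
    subprocess.call(["smartctl", "-l", "scterc", dev])
    if SET_TIMEOUT:
        # 70 = 7.0 seconds, expressed in tenths of a second
        subprocess.call(["smartctl", "-l", "scterc,70,70", dev])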


Source: (StackOverflow)

Incredibly low disk performance on HP ProLiant DL385 G7

As a test of the Opteron processor family, I bought an HP DL385 G7 with an Opteron 6128 and an HP Smart Array P410i controller with no cache memory.

The machine has 20GB of RAM and 2 x 146GB 15k RPM SAS plus 2 x 250GB SATA2 drives, each pair in a RAID 1 configuration. I run VMware ESXi 4.1.

Problem:
Even with only one virtual machine (I tried Linux 2.6, Windows Server 2008 and Windows 7), the VM feels really sluggish. With Windows 7, the VMware Converter installation even timed out. I tried both the SATA and SAS disks: the SATA disks are nearly unusable, while the SAS disks feel extremely slow.

I can't see a lot of disk activity in the Infrastructure Client, but I haven't been looking for causes or even tried diagnostics, because I have a feeling that it's either the cheap RAID controller or simply its lack of cache memory.

Despite the problems, I continued and installed a virtual machine that serves a key function, so it's not easy to take it down and run diagnostics.
I would very much like to know what you think: is it more likely to be a problem with the controller/disks, or is this simply the low performance you get from budget components?

Thanks in advance,


Source: (StackOverflow)

Copy Linux host to new hardware

I need to do host-to-host migrations from old hardware to new hardware, specifically from HP BL460G7 to HP BL460G8. Both the old and new servers have 2 x 600GB 2.5" drives and are configured for RAID 1. I can afford 30 minutes of downtime per server.

There are four servers to migrate, the smallest has a total of 120GB allocated in logical volumes and the largest has 510GB allocated. Three servers are running RHEL5 and one is running RHEL6.

I've been racking my brain on how to do this within the given time frame and without destroying the OS and critical data.

My only thought is this:

  • remove one drive from the old server (server is turned on)
  • remove both drives from the new server (server is turned off)
  • remove G7 drive from caddy and set aside
  • remove G8 drive from caddy and install into G7 caddy
  • install G8 drive in G7 caddy into old server
  • wait for RAID controller to rebuild RAID1 array
  • when finished, shut down the old server
  • remove G8 drive in G7 caddy
  • install G8 drive in G8 caddy and insert into G8 (single drive installed)
  • boot G8 server
  • wait for OS to boot
  • when the OS has booted, insert the remaining drive
  • wait for RAID array to rebuild

Does this sound sane?

EDIT: The RHEL5 are RHEL5.10 and the RHEL6 is RHEL6.6

I should have also noted that two of the systems are part of a hot four-node cluster that does near-constant replication of application "events" (it's part of a critical infrastructure system). We have backups, but we only use them in the event of total system failure.

Previous testing has shown a maximum dd throughput between systems of around 50MBps, which is far too slow.

EDIT: I was going to rely on kudzu to pick up and deal with the hardware changes.


Source: (StackOverflow)