
Posts

Showing posts from October, 2014

linux - Setting up Windows network on Xen

I'm trying to install a Windows XP server in a Xen environment. The OS is booting fine. Unfortunately I can't figure out how to set up the network settings. Dom0 is a Debian Lenny currently hosting around 10 Linux virtual servers. Windows tells me I have a "limited connection". It can't get any DHCP response, nor access other hosts in the network. Here is the Xen guest config file:

kernel = '/usr/lib/xen-3.2-1/boot/hvmloader'
builder = 'hvm'
memory = '1024'
device_model='/usr/lib/xen-3.2-1/bin/qemu-dm'
acpi=1
apic=1
pae=1
vcpus=1
name = 'winexchange'

# Disks
disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w', 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ]

# Networking
vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0' ]

# video
stdvga=0
serial='pty'
ne2000=0

# Behaviour
boot='c'
sdl=0

# VNC
vfb = [ 'type=vnc' ]
vnc=1
vncdisp
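Since the domU config itself looks sane, a first check is whether dom0 actually enslaves the guest's vif to xenbr0. A minimal diagnostic sketch on dom0 (domain name taken from the config above; the rest is standard Xen 3.x tooling):

    brctl show xenbr0            # the guest's vif should appear as a bridge port
    xm network-list winexchange  # state 4 means the backend device is connected
    ip link show                 # the matching vifN.0 interface should be UP

If the bridge side checks out, Windows sometimes fails to drive the emulated NIC; pinning an explicit NIC model in the vif line is a common workaround (an assumption here, not something the question confirms):

    vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0, model=rtl8139' ]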

Active Directory / DNS naming design, multiple sites, single domain,

The network I currently manage will shortly be expanding to cover two sites, and due to the organisation of the company, I have already determined that a site-to-site hardware VPN will be implemented to link the two sites. The WAN link between the two will be 20-100Mb/s, so no issues with bandwidth for AD/DFS replication etc. I will also likely be looking at installing a single Active Directory domain across both HQ and branch office, as I understand that modern Microsoft best practice recommends steering away from multiple domains in a single forest except in exceptional circumstances. My question is this: if I maintain one domain across both sites (say company.com), how can I maintain a logical DNS separation between the two sites (say, for argument's sake, dc1.london.company.com and dc1.birmingham.company.com)? Can this be done by structuring DNS in a certain way, without having to have a london.company.com and a birmingham.company.com AD domain, one for each site? Thank
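What the question describes is possible with plain DNS zones inside one AD domain: the AD domain stays company.com, while london.company.com and birmingham.company.com exist only as DNS zones holding site-specific host records. A hedged sketch using the Windows Server dnscmd tool (zone names from the question; the example record and IP are hypothetical):

    rem create AD-integrated subzones on a DC that runs DNS
    dnscmd /ZoneAdd london.company.com /DsPrimary
    dnscmd /ZoneAdd birmingham.company.com /DsPrimary

    rem register a host in the site-specific zone
    dnscmd /RecordAdd london.company.com dc1 A 10.1.0.10

Machines keep their AD membership in company.com; only their DNS names live in the per-site zones.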

subnet - Trouble subnetting a network

I have the network address 160.80.0.0/24 and have to subnet things like this: How would I subnet this network to accommodate such a huge number of hosts? :D Any guidance for a newbie? I've already read the How Does Subnetting Work post, but they use small number examples, not big like this. I have to use CIDR and VLSM. Thanks guys! :D

Answer

A /24 subnet only holds 254 host addresses. If you have 16,000 hosts, you need a larger subnet. If you really only have 160.80.0.0/24 then you are asking the impossible. A /18 will hold 16,382 hosts, which should be enough. Or, if you have two distinct subnets of 8,000 hosts each, a couple of /19 subnets would be better. Assuming the whole /16 network at Universita' di Roma is at your disposal, you might do this:

Subnet            Broadcast         Netmask          #Hosts
160.80.0.0/17     160.80.127.255    255.255.128.0    32766
160.80.128.0/18   160.80.191.255    255.255.192.0    16382
160.80.192.0/19   160.80.223.255    255.255.224.0    8190
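For the arithmetic behind that table: a /n IPv4 subnet spans 2^(32-n) addresses, two of which (the network and broadcast addresses) are unusable for hosts. A quick worked check against the rows above:

    /17 -> 2^15 - 2 = 32766 hosts
    /18 -> 2^14 - 2 = 16382 hosts
    /19 -> 2^13 - 2 =  8190 hosts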

amazon route53 - How to delegate DNS for sub-domain to domain with separate hosted zones

TL;DR: When the primary hosted zone contains two NS record sets, holding the name servers for the primary hosted zone (example.com) and for the subdomain hosted zone (sub.example.com), is that sufficient to get sub.example.com resolved via the nameservers of example.com? Domains are currently managed by name.com, but DNS should be managed by AWS Route 53 in order to automate creation of new sub-domains and set up records dynamically. The domain ownership should stay with name.com, which means a custom name server needs to be configured for the domain example.com. Subdomains like sub.example.com should be resolved through the name servers of example.com as well; otherwise adding a new subdomain would require configuring a custom name server on name.com. In the current setup, each domain and sub-domain is in its own hosted zone. example.com has an NS record set and sub.example.com has one, too. Now, to delegate, I have added another NS record in example.com for sub.example.com contai
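The standard Route 53 pattern here is delegation from the parent zone: the sub.example.com hosted zone gets its own set of Route 53 nameservers, and the example.com hosted zone holds an NS record set for sub.example.com listing exactly those servers. A hedged sketch of the parent-zone records (the awsdns hostnames below are placeholders; Route 53 assigns the real ones when the child zone is created):

    ; in the example.com hosted zone
    sub.example.com.   172800  IN  NS  ns-1111.awsdns-11.org.
    sub.example.com.   172800  IN  NS  ns-222.awsdns-22.com.

With that in place, a resolver following example.com's nameservers is referred onward to the child zone, so nothing needs to change at name.com when subdomains are added.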

hp - Raid 5 on two scsi buses with different maximum speeds

I'm quite a noob; I'm trying to set up an old HP ProLiant DL385 G1 as a study machine. I've configured a RAID 5 out of six SCSI 3.5" HDDs using:
- a Smart Array 6i controller;
- (4 x SCSI Ultra320 36.4GB disks, 15K) AND (2 x SCSI Ultra3 36.4GB disks, 10K), all on the same local SCSI bus.
Now the bus should run at most at 160MB/s, as imposed by the slowest (Ultra3) disks. If I put the two groups of disks on two different local SCSI buses, would the overall performance of the array be improved? The two-bus configuration requires an optional terminator board...

Answer

Don't bother. The split-bus/split-channel (aka "Duplex Mode") design was only helpful in separating OS and data drive activity and offering an opportunity to use multiple Smart Array RAID controllers. It would break your 6 bays into 2 + 4 bays. Just use the system as-is, though. The stakes are low, and this is very old equipment/technology.

domain name system - DNS referral / delegation: which DNS is responsible; How to delegate the right way?

I bought the domain earechnung.at with Hetzner and am using my webspace at All-Inkl. I want to use the nameservers of my webhost (All-Inkl). As I registered the domain with Hetzner, nic.at (the Austrian domain registry) lists the following nameservers (all of them Hetzner's):

Nameserver (Hostname) 1: ns.second-ns.com
Nameserver (Hostname) 2: ns1.your-server.de
Nameserver (Hostname) 3: ns3.second-ns.de

Zonefile at Hetzner

The zonefile at Hetzner now looks like the following:

$TTL 7200
@    IN SOA ns5.kasserver.com. office.earechnung.at. (
            2014030300 ; serial
            14400      ; refresh
            1800       ; retry
            604800     ; expire
            86400 )    ; minimum
@    IN NS ns6.kasserver.com.
@    IN NS ns5.kasserver.com.
@    IN A  85.13.135.165
mail IN A  85.13.135.165
www  IN A  85.13.135.165
w3   IN A  85.13.135.165
ftp
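A quick way to see the mismatch this setup creates: the registry delegates to Hetzner's servers, while the zone's own NS records point at All-Inkl (kasserver.com). The delegation chain can be inspected with dig (standard tooling, nothing assumed):

    dig NS earechnung.at +trace                 # which servers .at actually delegates to
    dig @ns1.your-server.de earechnung.at NS    # what the delegated server answers

For the All-Inkl nameservers to be the responsible ones, the nameserver entries at nic.at would have to be changed to ns5/ns6.kasserver.com; NS records inside a zone that the registry never delegates to are not consulted by resolvers.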

networking - DNS: Can master server be non-authoritative?

We have about 10 domains in 10 different countries and we want to set up some centralized DNS management. Basically we have this design in mind (all servers are RHEL, with BIND as DNS):

- Have 1 master DNS server, hidden and not accessible from the internet, which contains the zone files for all these domains, so that we can change everything in one place.
- Have slave DNS servers in each country, to which these zones are respectively replicated from the master server.

The weird part of this design, as I see it, is that only the slave servers would be in the DMZ and accessible from the internet, and only they would be authoritative, having an NS record for each such domain. Does it make any sense? Is it even possible to have a master server for a domain that isn't considered authoritative, as it doesn't itself have an NS record? (There is no point in having an NS record for a server that's not visible from the internet, I guess.)

Answer

From my experience, there is nothing wrong with your design, except for th
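This hidden-master layout is a well-established BIND pattern. A minimal sketch of the two sides (IPs are placeholders; zone and file names are assumptions):

    // on the hidden master
    zone "example-country.tld" {
        type master;
        file "/var/named/zones/example-country.tld.db";
        allow-transfer { 198.51.100.10; };   // the country's public slave
        also-notify   { 198.51.100.10; };
    };

    // on the public slave (the one listed in the NS records)
    zone "example-country.tld" {
        type slave;
        file "slaves/example-country.tld.db";
        masters { 203.0.113.5; };            // the hidden master
    };

Only the slaves appear in NS records, so only they are authoritative from the internet's point of view; the master is just the provisioning source.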

how can i move ext3 partition to the beginning of drive without losing data?

I have a 500GB external drive. It had two partitions, each around 250GB. I removed the first partition. I'd like to move the 2nd to the left, so it consumes 100% of the drive. How can this be accomplished without any GUI tools (CLI only)?

fdisk

Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xc80b1f3d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd2           29374       60801   252445410   83  Linux

parted

Model: ST350032 0AS (scsi)
Disk /dev/sdd: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start  End    Size   Type     File system  Flags
 2      242GB  500GB  259GB  primary  ext3         type=83

dumpe2fs

Filesystem volume name:   extstar
Last mounted on:          
Filesystem UUID:          f0b1d2bc-08b8-4f6e-b1c6-c529024a777d
Filesystem magic number:  0xEF53
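One CLI-only approach that avoids moving data in place (the risky part): copy the contents off, recreate the partition at the start of the disk, and copy back. A hedged sketch, assuming a second disk with ~250GB free is mounted at /mnt/backup:

    e2fsck -f /dev/sdd2                       # sanity-check the filesystem first
    mount /dev/sdd2 /mnt/old
    rsync -aHAX /mnt/old/ /mnt/backup/sdd2/   # preserve attributes
    umount /mnt/old
    parted /dev/sdd rm 2                      # drop the old partition
    parted /dev/sdd mkpart primary ext3 1MiB 100%
    mkfs.ext3 /dev/sdd1                       # the new partition should be sdd1; verify with 'parted print'
    mount /dev/sdd1 /mnt/old
    rsync -aHAX /mnt/backup/sdd2/ /mnt/old/

An in-place move (dd-ing the filesystem toward the front of the disk, then growing it with resize2fs) is possible but unforgiving of mistakes; without a spare disk, take a verified backup before attempting it.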

unicorn - Nginx: redirect http to https

I know the question has been asked countless times. Still, I can't get mine to work with the answers I've seen so far. I'm trying to force-redirect HTTP to HTTPS with nginx. When I visit https://subdomain.example.com, everything works just fine, but visiting http://subdomain.example.com gives me "This webpage has a redirect loop". I've tried putting

rewrite ^(.*) https://$host$1 permanent;

and

return 301 https://www.mydomain.com$request_uri;

I also tried proxy_set_header X-Forwarded-Proto $scheme; but that didn't solve the issue. How can I solve this endless loop? This is my nginx.conf:

upstream unicorn {
    server unix:/tmp/unicorn.example.sock fail_timeout=0;
}

server {
    server_name subdomain.example.com;
    listen 80;
    return 301 https://$host$request_uri;
    root /home/deploy/apps/example/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    try_files $uri/index.html $uri @unicorn
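Given that config, the loop is consistent with HTTPS requests never reaching a listen 443 block, or with the Rails/unicorn app (force_ssl) not being told the connection is already secure. A hedged sketch of the usual companion block (certificate paths are placeholders):

    server {
        listen 443 ssl;
        server_name subdomain.example.com;
        ssl_certificate     /etc/nginx/ssl/subdomain.example.com.crt;  # placeholder
        ssl_certificate_key /etc/nginx/ssl/subdomain.example.com.key;  # placeholder
        root /home/deploy/apps/example/current/public;

        location @unicorn {
            proxy_set_header X-Forwarded-Proto https;  # tell the app SSL is already terminated
            proxy_set_header Host $host;
            proxy_pass http://unicorn;
        }

        try_files $uri/index.html $uri @unicorn;
    }

The key detail is the hard-coded X-Forwarded-Proto https on the 443 side: if the app thinks the request is plain HTTP, it issues its own redirect to HTTPS and the loop never ends.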

domain name system - Route 53 as backup to Cloudflare DNS

I use Cloudflare as my main DNS provider right now, but am looking to add Route 53 as another provider/backup in case one or the other goes down. (Dyn went down last year and it caused a headache with just one provider.) It is my understanding that I need to add the NS records to each domain and then update the name servers for each domain with my domain registrar. Is this the best way to ensure uptime? Are there any issues with this?

Answer

I highly recommend reading the two links that EEAA commented, along with a third. Essentially, you have the right idea. Specify your additional nameservers as NS records in both of your zones (Route 53 and Cloudflare), and then add those new nameservers to your registrar's configuration for your domain. There will be zero downtime if you do this correctly. The way to ensure you do this correctly is to use a tool like Stack Exchange's DNSControl to make the changes for you. Get it working with your current single DNS provider, and then
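For reference, a hedged sketch of what that dual-provider setup looks like in DNSControl's dnsconfig.js (the credential names are whatever is defined in creds.json, and the exact API should be checked against your DNSControl version; the record is an example):

    var REG_NONE = NewRegistrar("none", "NONE");
    var CLOUDFLARE = NewDnsProvider("cloudflare", "CLOUDFLAREAPI");
    var R53 = NewDnsProvider("r53", "ROUTE53");

    D("example.com", REG_NONE,
        DnsProvider(CLOUDFLARE),
        DnsProvider(R53),
        A("@", "192.0.2.1")
    );

A single dnscontrol push then keeps both zones in sync, which is exactly the property that makes dual-provider DNS safe to operate.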

Trouble setting up RAID 5 with three drives

I'm using a hardware HP RAID controller. My server came with one 250GB drive. I purchased three additional 250GB drives with the intent of having one as a cold spare. I put two of the additional drives in the server and powered it up. I went into the BIOS, enabled the RAID controller, then rebooted. It brought me to the HP RAID controller setup. I went in and deleted the three pre-existing arrays (which were just the three drives operating independently of each other). I then selected the array configuration option, selected all three drives together, and told it to initialize. It did its thing, then I went into "create array" and again selected all three drives. It presents me with a dialog asking what kind of RAID I want to set up, but the problem is that it doesn't present RAID 5 as an option. All I can choose is 0, 1, and 10. I know RAID 5 is possible with 3 drives, so what gives? I've never set up a RAID array before, so if there is something obvious I'm mis

SSD (Raid 1) vs SAS (Raid 10) Sql Server Hardware Recommendation?

Our current SQL Server machine (which is about 6 years old):

Box: Dell 2900
CPU: Xeon 5160 Dual Core
RAM: 4GB
HDD: 6x 15k RPM SAS drives in RAID 10

Since it's 6 years old, the drives have been spinning for 6 years straight, which is making my employer nervous about the life of the drives. We are considering buying, or upgrading, our current server. Does my employer have a rational fear, or should the drives last another few years? (They aren't really easy to find anymore, but we do have a hot-spare drive inside the computer on standby, and a hot-spare server with the same drives in it.) The idea is to either get another 6 SAS drives to run in RAID 10, or to consider getting two SSD (SLC) drives in RAID 1. Aside from cost, is there any reason to opt one way or the other? Is it worth upgrading the server in order to get a new CPU and RAM? Our SQL server's CPU generally doesn't peak over 10%. It runs a medium-traffic website and internal business apps, but nothing crazy in terms

performance - Different linux page cache behavior for servers doing the same work

I have two sets of servers with 128G of memory, distinguished by when they were provisioned, that are behaving very differently while running the exact same daemon (elasticsearch). I am using elasticsearch for full-text search, not log storage, so this is basically a read-heavy operation with minimal writes (less than 1MB/s). This daemon mmaps the full dataset of ~350GB into its virtual memory and then accesses certain portions of it to serve requests. These servers have no swap space configured. The problem is that one set of servers behaves well, issues ~50 major faults per second, and needs on average 10MB/s of disk IO to satisfy that demand. The poorly performing servers see 500 major faults per second and need on average ~200MB/s of disk IO to satisfy that. The increase in disk IO leads to poor p95 response latencies and occasional overloads as it hits the disk limits of ~550MB/s. They all sit behind the same load balancer, and are part of the same cluster. I could see if one server was behav
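Some things one might compare between the two server sets, since identical workloads with wildly different major-fault rates usually point at kernel memory-management settings rather than the daemon (standard Linux tooling; nothing here is specific to the question's hosts):

    sar -B 1 5                                      # majflt/s over time
    cat /proc/sys/vm/zone_reclaim_mode              # 1 on one set and 0 on the other is a classic culprit
    cat /sys/kernel/mm/transparent_hugepage/enabled
    numactl --hardware                              # NUMA layout differences between batches
    free -g; grep -i dirty /proc/meminfo

Machines provisioned at different times often differ in kernel version or BIOS/NUMA defaults, which would be consistent with "same daemon, different paging behavior".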

innodb - MySQL Server - Got error -1 from storage engine

I am currently trying to restore a MySQL table from the .ibd file. I have been following the instructions in the MySQL reference manual on how to use DISCARD and IMPORT TABLESPACE to replace the .ibd files. Discarding the tablespace returns no error and the file is deleted; however, importing the replacement .ibd file yields a "Got error -1 from storage engine" error. There doesn't seem to be much information about what exactly an error -1 is. Does anybody have any further insight as to why the tablespace import isn't working?
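For context, the sequence being attempted (standard MySQL/InnoDB syntax; the table name is hypothetical):

    ALTER TABLE mytable DISCARD TABLESPACE;
    -- copy the saved mytable.ibd into the database directory, then
    -- (ensure it is owned by the mysql user and innodb_file_per_table is on):
    ALTER TABLE mytable IMPORT TABLESPACE;

Error -1 at the IMPORT step commonly indicates InnoDB rejecting the file, for example a tablespace id mismatch because the .ibd came from a different server instance; the MySQL error log usually carries a more specific message than the client does.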

licensing - Can software licenses be resold?

There are some licenses that I have finished using (such as Adobe Photoshop, Microsoft Office, 3D Studio, Autodesk AutoCAD). It also includes some server software (Server 2003, User CALs, Endpoint Protection server, endpoint virus definition renewals...). If my company has finished using them, can I resell them to get back some money? They are all legally purchased software. Thanks, and looking forward to the answer, Dan.

mod rewrite - apache 2.2 mod_rewrite redirect some urls to https and force http for others

I have my page at a shared host. In general the site is reachable using HTTP and HTTPS. The page serves ads, and most of the ads are not available over HTTPS; therefore I only want the admin parts to be reachable via HTTPS. What I want is:

http://myurl.com/auth -> https://myurl.com/auth
http://myurl.com/admin -> https://myurl.com/admin
https://myurl.com/allotherstuff -> http://myurl.com/allotherstuff

Basically, all requests to auth and admin should be redirected to use SSL. Requests to other pages should be redirected from HTTPS to HTTP (if the page is requested using HTTPS). I tried the following .htaccess file, which makes "auth" and "admin" unreachable (too many redirects):

RewriteEngine On

# Redirect other sites to http
RewriteCond %{HTTPS} =on
RewriteCond %{REQUEST_URI} !^/?auth.*$
RewriteCond %{REQUEST_URI} !^/?admin.*$
RewriteRule ^/?(.*) http://%{SERVER_NAME}/$1 [R,L]

# Redirect auth and admin to https
RewriteCond %{HTTPS} !=o
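"Too many redirects" with rules that look correct often means the rules never see HTTPS at all: on shared hosts that terminate SSL on a proxy, %{HTTPS} stays off even for HTTPS requests, so the https-to-http rule fires forever. A hedged alternative that tests the forwarded protocol header instead (assumption: the host sets X-Forwarded-Proto):

    RewriteEngine On

    # send auth/admin to https when the original request was plain http
    RewriteCond %{HTTP:X-Forwarded-Proto} !=https
    RewriteCond %{REQUEST_URI} ^/?(auth|admin)
    RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R=301,L]

    # everything else back to http
    RewriteCond %{HTTP:X-Forwarded-Proto} =https
    RewriteCond %{REQUEST_URI} !^/?(auth|admin)
    RewriteRule ^/?(.*) http://%{SERVER_NAME}/$1 [R=301,L]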

security - How to configure SSL with apache2/httpd mass virtual hosting using mod_vhost_alias

I have been searching quite a bit now but couldn't find any answers. I'm using httpd 2.2.15 and CentOS 6.2. I have configured Apache for mass virtual hosting, i.e.:

UseCanonicalName off
VirtualDocumentRoot /var/www/html/%0

I will have the same "main" domain with different subdomains pointing to the virtual hosts. I have created a self-signed cert for testing purposes with common name *.mydomain.com. There's one IP for the entire server. How can I configure Apache to use SSL for my vhosts? And if possible, in addition to the above I would like to achieve the following as well: Can I define a directory, or preferably some files (e.g. a login page), that should be excluded from SSL? All vhosts are basically different instances of the same application (except the ones I mention in 2 below). Can I define some vhosts that should not use SSL (I have full control of the subdomain names for those)? These will be two applications, my home page (www) and some administrative application. If it
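Since all vhosts are subdomains of one domain, a single wildcard certificate can cover them with one IP; every name matches *.mydomain.com, so SNI isn't even required. A hedged sketch for httpd 2.2 (certificate paths are placeholders):

    NameVirtualHost *:443
    <VirtualHost *:443>
        ServerName catchall.mydomain.com
        ServerAlias *.mydomain.com
        UseCanonicalName Off
        VirtualDocumentRoot /var/www/html/%0
        SSLEngine on
        SSLCertificateFile    /etc/pki/tls/certs/wildcard.mydomain.com.crt
        SSLCertificateKeyFile /etc/pki/tls/private/wildcard.mydomain.com.key
    </VirtualHost>

Excluding specific vhosts or pages from SSL isn't a certificate matter; it's a redirect decision, handled per-vhost or per-path with mod_rewrite.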

svn - Nginx subversion commit failure

Hi, I have trouble committing PHP scripts to Subversion. I am using the Nginx web server to proxy requests to an Apache server for committing files (see the nginx config below); svn checkout and updates work fine.

server {
    listen 80;
    server_name svn.server;

    location / {
        access_log off;
        proxy_pass http://localhost:8081;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ ^/repos/.*.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
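One plausible culprit visible in this config (an observation, not a confirmed fix): the regex location matches any path under /repos/ ending in .php, and regex locations take precedence over the plain / prefix, so a commit touching a PHP file sends that WebDAV request to FastCGI instead of to Apache/SVN. A sketch that pins the whole repository path to the proxy:

    # '^~' makes this prefix match win over later regex locations,
    # so /repos/...something.php is proxied to Apache like everything else under /repos/
    location ^~ /repos/ {
        access_log off;
        proxy_pass http://localhost:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }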

domain name system - Performance penalty when using CNAME

We are using the following CNAME records for a server, i.e.:

foo.example.com => CNAME => server1.example.com
server1.example.com => CNAME => ec2-34-142-138-31.compute-1.amazonaws.com
ec2-34-142-138-31.compute-1.amazonaws.com => A => 34.142.138.31

Is this configuration common? Is the performance penalty of using 2 chained CNAME records critical? To minimize the impact of the CNAME lookups, should I set a larger TTL for the 1st CNAME, but a shorter TTL for the 2nd? I.e.:

foo.example.com => CNAME (TTL=86400) => server1.example.com
server1.example.com => CNAME (TTL=300) => ec2-34-142-138-31.compute-1.amazonaws.com
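The chain is easy to observe directly; in practice, most recursive resolvers chase both CNAMEs within a single client query, so clients see one round trip plus whatever the authoritative lookups cost on a cold cache. Standard dig usage (the commented answer section is illustrative of the expected shape):

    dig +noall +answer foo.example.com
    # foo.example.com.       86400  IN  CNAME  server1.example.com.
    # server1.example.com.     300  IN  CNAME  ec2-34-142-138-31.compute-1.amazonaws.com.
    # ec2-34-142-138-31.compute-1.amazonaws.com.  60  IN  A  34.142.138.31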

opensolaris - ZFS: Mirror vs. RAID-Z

I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services: be an iSCSI target for XenServer virtual machines, and be a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This allows one free port per controller for upgrading the array down the road. Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king. I'm having a hard time seeing what the benefit of using RAIDZ over mirroring would be. With this setup I can see two options: two mirrored pairs in one stripe, or RAIDZ2. Both should protect against 2 drive failures and/or one controller failure... the only benefit of RAIDZ2 would be that any 2 drives could fail. The usable storage should be 50% of capacity in both cases, but the first should have much better performance, right? The other thing I
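The two candidate layouts, as zpool commands (device names are hypothetical OpenSolaris-style ids; one disk from each controller per mirror, as described):

    # two mirrored pairs striped together ("RAID10-like")
    zpool create tank mirror c1t1d0 c2t1d0 mirror c1t2d0 c2t2d0

    # raidz2 across all four disks
    zpool create tank raidz2 c1t1d0 c1t2d0 c2t1d0 c2t2d0

Both give ~50% usable capacity, but the striped mirrors serve random reads from more independent vdevs, which matters for iSCSI-backed VMs; RAIDZ2's advantage is surviving any two disk failures, as the question notes.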

hp proliant - Is there a complete reference for HP's diagnostic port 84 and/or port 85 codes?

The HP ProLiant DL580 systems (G5 is the one I'm interested in at the moment) have a two-digit 7-segment display on the system board (visible only inside the chassis) which shows various diagnostic codes that HP refers to as "port 84" or "port 85", switchable via a DIP switch. The manuals mention this display but give only sketchy information about the actual codes displayed. Is there anywhere I could find a more specific and complete reference for the display codes? (I've done some web searches but have so far found only unrelated results.)

DNS Records: Forwarding a port on my public domain

I've set up an MX record for a local mailserver before, but I've never done this... I want to set up my public domain (registered with Dreamhost) to accept IPP (Internet Printing Protocol) jobs and send them to the IPP LaserJet printer on my home LAN. IPP uses port 631. What do I put in my DNS records?

Answer

DNS doesn't do anything with ports; it's strictly for mapping names to IP addresses. What you need to configure is your firewall: accept connections to port 631 on your public IP and forward them to the (presumably) internal IP address of the printer.

apache 2.2 - Best Practices for Webserver Benchmarking

I have a webserver that I wish to benchmark before I make some optimizations, to see if they have any effect. However, I want to know: what are the best practices for benchmarking? For example, a co-worker told me to benchmark the machine from another machine on the local network, to eliminate network traffic problems. However, I was thinking of using an off-site machine to benchmark, because I wanted to see if the optimizations would make any difference in a real-world case. I was arguing that many of the speed tweaks deal with optimizing the network connection. For example, in Apache, the KeepAlive setting allows browsers to use a single TCP connection to request multiple objects, instead of opening and closing a connection for each resource. If the test was done over a local network connection, then that tweak wouldn't really make that much of a difference, right? The same goes for minifying js/css and removing whitespace/comments from HTML. On the other hand, I do see the problem with int
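The KeepAlive example can be made concrete with ab (ApacheBench): run the same benchmark with and without keep-alive and compare. Over a LAN the difference is small because connection setup is cheap; over a WAN it dominates. Standard ab flags; the URL is hypothetical:

    ab -n 1000 -c 10 http://www.example.com/      # new TCP connection per request
    ab -n 1000 -c 10 -k http://www.example.com/   # reuse connections (HTTP KeepAlive)

Running the pair once from the LAN and once from an off-site host is one way to quantify exactly the disagreement described above.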

linux - How much memory would be needed for a Wordpress site with 8.000 hits per day with the occasional spike?

I'm running a Wordpress site with about 8.000 hits per day, which occasionally spikes to 12.000 hits. I'm currently considering a VPS server. My setup is WP with Apache and MySQL. Can anyone recommend how much memory would be sufficient in this situation?

Answer

No one can completely answer your question without knowing details about your install, such as what plugins you are running and the memory requirements of each plugin. Wordpress does not scale well without some sort of caching plugin. W3 Total Cache and WP Super Cache are both great plugins that will let your site scale much beyond a default Wordpress install. These plugins essentially minimize the dynamic PHP compilation and SQL hits, and instead serve HTML files. It then becomes a calculation much like for any other busy web server. How many images are coming off of your server? Is output gzipped? Are you planning for future growth? Many factors go into how fast the website will serve up and how muc

apache 2.2 - Apache2 randomly stop working, error 403

I just installed a personal Ubuntu server where I'm working, to test our PHP websites. It is a 12.04 LTS, up to date, with LAMP and Samba installed. I set it up to work with the /home/administrateur/www directory as the default DocumentRoot. I did so: added the www-data user to the administrateur group, recursively gave ug+rwx permissions on admin/, and changed the default root to /home/administrateur/www in /etc/apache2/sites-available/default. So far, everything's OK... but Apache restarts every day, one or more times, and then I can't access the websites and get a 403 error. The www/ folder, which is usually available via our local network, is no longer accessible. But as soon as I connect to the server with PuTTY, everything's fine again. This is really weird. My error log looks like this for this morning:

PHP Deprecated: Comments starting with '#' are deprecated in /etc/php5/apache2/conf.d/ming.ini on line 1 in Unknown on line 0
[Sun Mar 02 06:51:47 2014] [notice] A

domain name system - Active Directory: is it good to have 127.0.0.1 as DNS server entry on 2 machines that are 2 DNS servers in a local net?

This is a small local net with 10 computers in an office; they are running MS Server 2003 Active Directory with 2 domain controllers. Each of the 2 DCs is also a DNS server, and they synchronize. There are no more than these 2 DCs and 8 client machines in the whole domain. In the config of the only network card of each computer, it is like this: the 8 client machines have an entry of DC1 as 1st and DC2 as 2nd DNS server. DC1 and DC2 both have 127.0.0.1 as their DNS server entry. Is there anything bad about this 127.0.0.1 entry? I thought this was simple and clean and Microsoft standard? Would it be better to put the machine's own LAN address there (192.168.0.11 on DC1 and 192.168.0.12 on DC2) rather than the loopback address? The DNS servers themselves (on DC1 and DC2) do their external lookups via the DSL router.

Answer

I think that is perfectly fine. It wouldn't be a good idea to put their LAN IP, because if that ever changes, DNS resolution may break. 127.0.0.1 will never change and

nameserver - How can I prevent someone else's name server registration from pointing to my IP addresses? (i.e. change com zone file)

I got a new IP address block from my ISP; let's call it 2.2.2.0/25. 2.2.2.1 and 2.2.2.2 get frequent DNS traffic; looking at the traffic, it's destined for the nameservers ns1.tp.com and ns2.tp.com. tp.com has ns1.tp.com and ns2.tp.com as its nameservers, so no content is available on that site. ns1.tp.com and ns2.tp.com are registered with the zone authority for .com (is that ICANN?). How do I go about notifying the proper authority that those IP addresses are mine now and that this nameserver entry they've got is stale?

EDIT: ns1.tp.com and ns2.tp.com are A records, but they are name servers registered with whoever manages the .com zone. tp.com's nameservers are ns1.tp.com and ns2.tp.com, which are my servers, so it's definitely not still being served up as a plain A record.
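Two lookups show where the stale data lives (standard tools; tp.com stands in for the real domain, as in the question):

    whois tp.com                                       # identifies the sponsoring registrar of tp.com
    dig @a.gtld-servers.net ns1.tp.com A +norecurse    # the glue the .com zone actually serves

The glue A records for ns1/ns2.tp.com live in the .com registry (operated by Verisign, not ICANN) and can only be changed by tp.com's registrar on behalf of the domain's owner, so the practical route is contacting that registrar or owner, or escalating through one's own ISP, rather than approaching the registry directly.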

Nginx: how to redirect all request from https://domainA.com to https://domainB.com? (and not only http ones)

I have several domain names, and I want to redirect all of them to https://indi-ticket.fr. I have configured Nginx, and all requests to http://other-domain.xx are redirected to https://indi-ticket.fr, but not those to https://other-domain.xx. Here is the relevant part of my Nginx conf:

server {
    listen 80;
    server_name www.indi-ticket.fr indi-ticket.fr www.indi-tickets.fr indi-tickets.fr www.indi-ticket.com indi-ticket.com;
    return 301 https://indi-ticket.fr$request_uri;
}

server {
    listen 443 ssl;
    server_name indi-ticket.fr;
    ssl on;
    ...

What is wrong with this conf? Why does curl -I -L https://indi-tickets.fr not redirect to https://indi-ticket.fr?

Answer

https://indi-tickets.fr is using HTTPS, which means it's hitting port 443, not port 80. Your port 443 config doesn't do any redirecting. It's possible to have a
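The truncated answer is presumably heading toward a second 443 server block for the other names. A hedged sketch (this requires a certificate that is also valid for the indi-tickets.fr / indi-ticket.com names, otherwise clients warn before the redirect is ever seen; paths are placeholders):

    server {
        listen 443 ssl;
        server_name www.indi-tickets.fr indi-tickets.fr www.indi-ticket.com indi-ticket.com;
        ssl_certificate     /etc/nginx/ssl/other-domains.crt;  # placeholder; must cover these names
        ssl_certificate_key /etc/nginx/ssl/other-domains.key;  # placeholder
        return 301 https://indi-ticket.fr$request_uri;
    }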

linux - How to bind a non-local IPv6 address?

Is there an equivalent of net.ipv4.ip_nonlocal_bind for IPv6? I need to start my nginx on boot on such an IP... My Ubuntu doesn't have this IPv6 address assigned on eth0 quickly enough, despite this /etc/network/interfaces:

iface eth0 inet6 static
    address 1:2:3:4::5
    netmask 64

During boot:

Starting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
[emerg]: bind() to [1:2:3:4::5]:80 failed (99: Cannot assign requested address)

I need to run /etc/init.d/nginx restart a few seconds after boot to make things work :-/ NB: 1:2:3:4::5 is used here only for demo; I have a valid IPv6 address on my server.
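Two hedged avenues, depending on kernel age. Newer kernels grew an IPv6 counterpart of the sysctl (the name is real, but it requires a recent kernel, roughly 4.3+, which is an assumption for this Ubuntu); on older ones, an interface hook can restart nginx once the address actually exists:

    # kernels that support it:
    sysctl -w net.ipv6.ip_nonlocal_bind=1

    # fallback in /etc/network/interfaces: restart nginx after the address is configured
    iface eth0 inet6 static
        address 1:2:3:4::5
        netmask 64
        post-up invoke-rc.d nginx restart || true

A contributing factor can also be IPv6 duplicate address detection delaying the address at boot; sysctl -w net.ipv6.conf.eth0.dad_transmits=0 (assumption: acceptable on this network) makes the address usable immediately.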

domain name system - Setting up primary and secondary dns on one server

I'm configuring a VPS to be used as a webserver, and everything is going well, but now I'm coming to the point of DNS. The VPS has two working IP addresses assigned to it. Now I need to get a primary DNS server running on one, and a secondary DNS server on the other IP address (I know it is better and safer to run them on two different machines, but those I don't have). I can find loads and loads of articles on the net on how to configure one DNS server on one machine, or two on two different machines, but nothing about how to configure two DNS servers on one single machine. Anyone who can help me? I'm using CentOS 6.3.

Answer

You can do what you want easily: configure your DNS server to listen on both IP addresses. The internet has no way of knowing whether it's one server or two; it's just happy you have two IP addresses serving your zone's information. If you do this, however, you are intentionally disregarding the entire purpose of requiring two DNS servers. Th
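In BIND terms, that's a single named answering on both addresses (the IPs are placeholders standing in for the VPS's two public addresses):

    // /etc/named.conf
    options {
        listen-on { 192.0.2.10; 192.0.2.11; };
    };

The ns1.example.com / ns2.example.com glue records then each point at one of the two addresses; resolvers treat them as independent servers even though they are one process.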

hp proliant - Why doesn't the Internal USB port boot ESXi 6 on my Microserver Gen8?

I'm trying to run ESXi 6 off the internal USB 2.0 port on an HP Microserver Gen8. No matter what I try, it will not make any attempt to boot the internal USB stick to load the hypervisor. I installed ESXi 6 with the customised ProLiant HP image (Jan 2016). The steps I took: burn the ISO to a CD and install it to an 8 GB Transcend stick while it was plugged into the internal USB slot of the Microserver Gen8. I pulled all SATA drives before installing. The ESXi setup detected the USB drive no problem and installed without any issues. Upon rebooting, it doesn't boot from it at all. I checked my BIOS and all the USB-related options appear to be correct:

USB Enabled - Enabled
USB Boot Support - Enabled
Main Boot order - USB DriveKey is set to first priority
Internal drives boot priority - USB DriveKey first
USB Enumeration - Enabled

To confirm the USB stick is working, I pulled it out of the Microserver Gen8 and booted it on a laptop, which booted no problem. I had the same problem wi

apache 2.2 - SSL with multiple servers and domains

Our current setup utilizes multiple VPSs along with multiple domains, e.g. (and yes, I know these IPs are all fake and unusable in reality; all for example):

alpha.domain.com 66.555.555
beta.domain.com 66.555.554
charlie.domain.com 66.555.555
delta.domain.com 66.555.557

Let's assume the first 3 domains require SSL (HTTPS). There are two challenges here: one is multiple domains, the other is multiple IPs/servers. (Currently each is on its own server, but in theory we could stack multiple IPs onto a single server too... either way, same issue, I believe.) What is the best method of certification for this? If alpha were our primary ecommerce site, let's say, I would think it should have its own unique SSL cert. But the others are secondary systems which primarily run crons and backend scripts that require HTTPS for interaction. Is it possible to share one cert among multiple domains/servers/IPs, or should we get a cert for each domain or each IP/server?

Answer

Yo

linux - Awstats - outputting stats for merged Access_logs only producing stats for one server's log

I've been attempting this for two weeks, I've consulted countless sites on this issue, and it seems there is something I'm not getting here; I'm at a loss. I managed to figure out how to merge logs from two servers together (taking care to only merge the matching domains together). The logs from the first server span 15 Dec 2012 to 8 April 2014; the logs from the second server span 2 Mar 2014 to 9 April 2014. I was able to successfully merge them using the logresolvemerge.pl script, simply enumerating each log and > out_putting_it_to_file. Looking at the two logs from each server, the format seems exactly the same. The problem I'm having is producing the stats page for the logs. The command I've boiled it down to is:

/usr/share/awstats/tools/awstats_buildstaticpages.pl -configdir=/home/User/Documents/conf/ -config=example.com -awstatsprog=/usr/share/awstats/wwwroot/cgi-bin/awstats.pl -dir=/home/User/Documents/parced -month=all -year=all

Running a cron job manually and immediately

(I have already read How can I test a new cron script?.) I have a specific problem (a cron job doesn't appear to run, or run properly), but the issue is general: I'd like to debug scripts that are cronned. I am aware that I can set up a * * * * * crontab line, but that is not a fully satisfactory solution. I would like to be able to run a cron job from the command line as if cron were running it (same user, same environment variables, etc.). Is there a way to do this? Having to wait 60 seconds to test script changes is not practical.

Answer

Here's what I did, and it seems to work in this situation. At least, it shows me an error, whereas running from the command line as the user doesn't show the error.

Step 1: I put this line temporarily in the user's crontab:

* * * * * /usr/bin/env > /home/username/tmp/cron-env

then took it out once the file was written.

Step 2: Made myself a little run-as-cron bash script containing:

#!/bin/bash
/usr/bin/
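The script is cut off above; the well-known version of this technique finishes it along these lines (a reconstruction, not the author's verbatim text):

    #!/bin/bash
    # run the given command with exactly the environment cron captured in step 1
    /usr/bin/env -i $(cat /home/username/tmp/cron-env) "$@"

Usage would then be ./run-as-cron /path/to/script --args, reproducing cron's sparse environment (no PATH surprises, no interactive shell variables).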

hacking - Reinstall after a Root Compromise?

After reading this question on a server compromise, I started to wonder why people seem to continue to believe that they can recover a compromised system using detection/cleanup tools, or by just fixing the hole that was used to compromise the system. Given all the various rootkit technologies and other things a hacker can do, most experts suggest you should reinstall the operating system. I am hoping to get a better idea of why more people don't just take off and nuke the system from orbit. Here are a couple of points I would like to see addressed:

- Are there conditions where a format/reinstall would not clean the system?
- Under what types of conditions do you think a system can be cleaned, and when must you do a full reinstall?
- What reasoning do you have against doing a full reinstall?
- If you choose not to reinstall, then what method do you use to be reasonably confident you have cleaned and prevented any further damage from happening again?

Answer

A security decision

monitoring - What tool do you use to monitor your clients?

Well, we had "What tool do you use to monitor your servers?", and I wondered: do you (and should you) monitor your clients (desktops and laptops)? What tools are useful for this? It seems to me that one should monitor the clients - to gauge how well they are performing, perhaps to keep an eye on battery life and power usage, perhaps watching hard drive, network, CPU and maybe even GPU usage, and, indeed, to see if lab users avoid a certain machine or if it never shows up on the network. Please state which platform(s) a given tool works with, and the licence or cost, if it is easily determined.