
Posts

Showing posts from November, 2019

linux - crontab schedules and cron jobs

I've put two files in the /etc/cron.d/ directory. The first makes a new post every day at 12:00 AM:

0 0 * * * php /var/www/site1/helper post:make

The second updates the latest post every 10 minutes:

10 * * * * php /var/www/site1/helper post:update

Do I have to do something else for these jobs to run on schedule (e.g. every 10 minutes), or do I have to run crontab job1 and crontab job2? EDIT: I also installed cronie.

Answer

Putting files in cron.d is enough. However, your second entry should be:

*/10 * * * * php /var/www/site1/helper post:update

Otherwise it runs once an hour, at the 10th minute.
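One caveat worth adding, since the question is about cron.d specifically: with Vixie cron and cronie, files in /etc/cron.d use the system crontab format, which takes an extra user field between the schedule and the command — a per-user crontab line dropped in unchanged will not run. A minimal sketch of a combined file (the www-data user and the file name are assumptions):

```
# /etc/cron.d/site1-posts  (hypothetical file name)
# System crontab format: minute hour day month weekday USER command
0    0 * * *  www-data  php /var/www/site1/helper post:make
*/10 * * * *  www-data  php /var/www/site1/helper post:update
```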

hp - ML350 G6 drive not detected

I have an ML350 G6 for which I have bought some new drives (HP P/N: 507129-004). According to the drive specs these are compatible with the G6, but the SAS controller (P410i) is not detecting them. The controller works fine, as I have another set of disks with xcp-ng installed on them working perfectly in the server. Given that the RAID controller is fine, the server works perfectly with other disks, and assuming the 507129-004 disks are not faulty (I have 15 and they all act the same), what else could prevent the P410i SAS RAID controller from detecting the drives? Any guidance appreciated.

How could I add NAT with private subnet to this Cisco 2500 config?

There's this Cisco 2500 with one V.35 port going to an HDSL DCE, and the Ethernet port going to a Cisco PIX; there are also other IPs (subnet 80.something) in the configuration on the serial side that are used to route traffic to the DSL provider. I'm wondering if it is possible to remove the PIX and move everything onto the Cisco 2500. I can usually manage to do this when the public IP is on the WAN-facing port and the private IP on the LAN one, but I don't know how I could add a private IP to the 2500 and use the public one for NAT when they're both on the same internal interface. The current config for the 2500 looks like:

ip subnet-zero
!
interface Ethernet0
 description connected to PIX
 ip address 217.x.x.1 255.255.255.248
 ip nat inside
 no ip directed-broadcast
 no ip mroute-cache
 no cdp enable
!
interface Serial0
 no ip address
 no ip directed-broadcast
 no ip mroute-cache
 encapsulation frame-relay
 bandwidth 1024
 no cdp enable
!
interface Serial0.1 point-to-point
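For the situation described, one common pattern is to add the private subnet to Ethernet0 as a secondary address and overload NAT out the serial subinterface. A rough, untested sketch — the 192.168.1.0/24 subnet and access-list number are assumptions, and whether NAT plays well with a secondary address on the inside interface should be verified on this particular IOS version:

```
interface Ethernet0
 ip address 217.x.x.1 255.255.255.248
 ip address 192.168.1.1 255.255.255.0 secondary
 ip nat inside
!
interface Serial0.1 point-to-point
 ip nat outside
!
access-list 1 permit 192.168.1.0 0.0.0.255
ip nat inside source list 1 interface Serial0.1 overload
```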

storage - How to upgrade the firmware of HP SAS expander card without Smart Array controller or Proliant Server?

How can I update/upgrade/flash the firmware of an HP SAS expander card [468406-B21, a.k.a. 487738-001]? I used to do this using Windows and an HP P410 Smart Array controller; however, that controller is no longer available. The online ROM flash component is not an option, because I don't own an HP Smart Array controller. Neither is the HP Service Pack for ProLiant + USB key/stick an option, because that requires a ProLiant server.

Answer

Upgrading the HP SAS expander is possible using Linux and a SAS HBA. Note: flashing firmware to a SAS expander will likely not work while the expander is connected to a SAS RAID controller, because that controller might hide all devices behind it from the OS. An example of a SAS HBA is the Supermicro SAS2LP-MV8. In case you haven't got Linux, you can use a Linux live CD; you could try the most recent Ubuntu live CD. A 32-bit download will do; 64-bit will also work.

1. Prerequisites

Start a Linux terminal. That is Ctrl + Alt + T using th

redirect wildcard subdomains to https (nginx)

I've got a wildcard SSL certificate and I'm trying to redirect all non-SSL traffic to SSL. Currently I'm using the following to redirect the non-subdomained URL, which is working fine:

server {
    listen 80;
    server_name mydomain.com;
    # Rewrite all non-SSL requests to SSL.
    rewrite ^ https://$server_name$request_uri? permanent;
}

When I do the same thing for *.mydomain.com, it logically redirects to https://%2A.mydomain.com/ . How do you redirect all subdomains to their https equivalent?
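One way to avoid the literal "*" ending up in the redirect target is to build it from $host — the hostname the client actually requested — instead of $server_name. A sketch of that approach (using return, which is the commonly recommended form for simple redirects):

```
server {
    listen 80;
    server_name mydomain.com *.mydomain.com;
    # $host echoes the requested hostname back, so every
    # subdomain redirects to its own https equivalent.
    return 301 https://$host$request_uri;
}
```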

apache 2.2 - Block traffic behind AWS ELB

My web servers are behind an ELB. I want to block traffic from a specific user agent which is a DDoS attack. Apache always sees the IP address of the ELB as the end user, so I tried the attempts below:

1. Blocking IP addresses at the ELB level is not possible, because it has a limit of 20 IP addresses and the IP addresses change with every attack.

2. Block access using a rewrite condition. This works, but if a lot of hits come in, the server load goes beyond 100 and all Apache threads become busy serving tons of 403s, so the site appears down for legitimate requests.

RewriteCond %{HTTP_USER_AGENT} ^SomeThing
RewriteRule ^(.*)$ - [F]

3. Blocking with mod_sec does the same thing of serving 403s, which creates the same effect as #2 above.

4. Block packets with the iptables string module, i.e. block packets which contain the specific user agent. In this scenario iptables sends DROP/REJECT to the attacker, Apache doesn't get a signal that the connection is now dead and waits for a timeout, which keeps all Apache threads in use until the timeout, so this method is not useful here
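On the iptables attempt, one hedged refinement is to REJECT with a TCP reset rather than DROP, so the kernel tears the connection down immediately instead of leaving the peer (and Apache) waiting on a timeout. A sketch — the matched string and port are placeholders, and note that string matching inspects individual packets, so it only fires when the user agent appears within a single packet:

```
# Reset connections whose packets contain the attacking user agent.
iptables -I INPUT -p tcp --dport 80 \
    -m string --string "SomeThing" --algo bm \
    -j REJECT --reject-with tcp-reset
```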

ubuntu - RAID 1+0 vs RAID 0+1

I went with some advice from someone I know to go with a RAID setup for this server I ordered; the specs are below. I plan on using this server to host multiple sites in a PHP/MySQL environment and an SVN repository on Ubuntu Server. I'd like a setup where the primary drive is mirrored, so that in the event of a failure on a drive the server could just use the other pair of drives. I'm reading on Wikipedia about RAID setups and I see RAID 0-5, but don't see a 10 listed. Perhaps I'm just not sure what I'm looking for; to be honest, I've never used anything RAID.

On-Board Intel ESB2 RAID controller - 0,1,5,10 SATA RAID
Manufacturer: SuperMicro
Model / Part Number: 6015P-TR
Processor(s): Dual (2x) Intel Xeon 2GHz 5130 Dual Core 64-Bit Processors - 4MB Cache, 1333MHz FSB
Memory: 4GB RAM (4x 1GB PC2-5300) - 8 slots on motherboard
Hard Drive(s): Four (4) Hitachi 500GB 7200RPM SATA Hard Drives
Optical Drive: DVD-ROM
Floppy Drive: Inclu
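For reference, RAID 10 (the "1+0" the title asks about) stripes data across mirrored pairs, so with four disks it can survive one failed disk in each pair — which matches the "mirrored primary" goal described above. If the on-board controller turns out to be fake-RAID under Ubuntu, a software-RAID sketch with mdadm (device names are examples):

```
# Create a 4-disk RAID 10 array: two mirrored pairs, striped together.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
```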

apache 2.2 - Mixing SSL and non-SSL content in an Apache2 virtual host

I have a (hopefully) common scenario for one of my sites that I just can't seem to figure out how to deploy correctly. I have the following site and directories for example.com.

These need to require SSL:
/var/www/example.com/admin
/var/www/example.com/order

These need to be non-SSL:
/var/www/example.com/maps

These need to support both:
/var/www/example.com/css
/var/www/example.com/js
/var/www/example.com/img

I have two virtual host declarations for example.com in my /sites-available/example.com file; the top one is *:443, the second one is *:80. Since I have two vhost declarations, if a request comes in on 443 the top virtual host is used, and likewise the bottom one for a port 80 request. However, I can't seem to enforce my SSL requirements using SSLRequireSSL, because I'm assuming a port 80 request to /admin or /order is not even hitting the *:443 vhost. Should I just Deny All to /order and /admin within the *:80 virtual host, so that if you try to request it on 80, you
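Rather than denying /admin and /order on port 80, an alternative worth considering is redirecting those paths from the *:80 vhost to their SSL equivalents, which keeps bookmarks and typed URLs working. A sketch using mod_alias, assuming the paths from the question:

```
# Inside the *:80 virtual host: bounce protected paths to SSL.
Redirect permanent /admin https://example.com/admin
Redirect permanent /order https://example.com/order
```

SSLRequireSSL can then stay in the *:443 vhost as a belt-and-braces check.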

nginx multiple domains with ssl and www redirection

nginx redirection is not working as expected with SSL and www redirection. I have two domains, say domain1.com and domain2.com. I need https with www redirection on both, i.e. https://www.domain1.com and https://www.domain2.com, but when I visit www.domain2.com it is redirected to https://www.domain1.com. Please see the configurations.

domain1.com:

server {
    listen 8080;
    server_name domain1.com;
    return 301 https://www.domain1.com$request_uri;
}

server {
    listen 443 default_server;
    ssl on;
    ssl_certificate /root/ssl/dom1/unified.crt;
    ssl_certificate_key /root/ssl/dom1/my-private-decrypted.key;
    root /var/www/dom1.com/html;
    index index.php index.html index.htm;
    server_name www.domain1.com;
}

domain2.com:

server {
    listen 8080;
    server_name domain2.com;
    return 301 https://www.domain2.com$request_uri;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /root/ssl/dom2/unified.crt;
    ssl_certificate_key /root/
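A likely culprit in configs like this is the default_server flag on the first 443 block: any HTTPS request whose Host/SNI name doesn't match a server_name lands on the default, i.e. www.domain1.com. A sketch of symmetric 443 blocks, assuming each certificate is valid for its www name and clients support SNI (listen ... ssl replaces the older ssl on directive in current nginx):

```
server {
    listen 443 ssl;
    server_name www.domain1.com;
    ssl_certificate     /root/ssl/dom1/unified.crt;
    ssl_certificate_key /root/ssl/dom1/my-private-decrypted.key;
}

server {
    listen 443 ssl;
    server_name www.domain2.com;
    ssl_certificate     /root/ssl/dom2/unified.crt;
    ssl_certificate_key /root/ssl/dom2/my-private-decrypted.key;
}
```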

best practices - DDOS Attack Victim - How much to Admit?

Here's the environment: a website that hosts a forum/journal/bboard/email/social-media application in a walled garden (i.e. you pay to use it or are invited to do so). Many clients pay to use the site during specific chunks of time (i.e. they lease access to the site) in order to interact with their own clients. There are dozens of clients in a broad range of fields. There is a very broad service level agreement, meaning it's not stated that the website can't go down for more than ten minutes, but there's a gentleman's agreement that it won't. They don't pay for the 24/7 support, but we give it to them because we love what we do. The site runs in 7 different languages throughout multiple time zones.

Here's the situation: the site goes down at 5:30 EST and stays "offline" for approximately two hours due to a DDoS attack. The clients' reactions vary from annoyed to livid. The clients are also not very tech savvy. The clients are accustomed to 24/7 support an

security - Installing Terminal Server (Remote Desktop Services) on a Domain Controller (Active Directory)

From my research, I've come to understand that installing Terminal Server (Remote Desktop Services) on a Domain Controller (Active Directory) is a cardinal sin - apparently there are some serious security risks. Could someone please elaborate and explain the risks? More specifically: how would someone go about compromising the server, and what is the worst that could happen? Understand these aspects of my particular configuration: no files are being stored on the server; the directory is only being used to authorize users to use Remote Desktop Services; the server will be accessed by fewer than 50 users. Thank you.

Answer

The simplest things I can think of right off the bat: start a process that fills the hard drives or RAM and crashes the server. More insidious tactics would use everything from cache and side-band attacks to malware and hacking toolkits to derive any and all information from AD, including potentially reversible passwords, security and other se

domain name system - Best Practices in Speeding-up DNS Propagation

I recently changed nameservers and it has been 24 hours since. Some of my visitors are complaining they are still viewing the old site, while some are already seeing the new site. Is there any way to speed up the DNS propagation without updating the hosts file of each of my visitors? Are there any best practices when it comes to changing nameservers to minimize this problem?

Answer

DNS records don't propagate in the sense that they aren't "pushed" from your server to other resolvers. What actually happens is that when other DNS servers look up your domain, they cache the record for X seconds so that they don't have to do another lookup for subsequent requests. X seconds should be determined by the TTL value on the record when it was retrieved from your name server. If you've already changed the address, there's nothing you can do but sit and wait. If you had planned this in advance, you could have lowered the TTL value. Some larg
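To see the TTL in question, dig prints it as the second column of the answer section; querying the authoritative server shows the configured value, while querying a resolver shows the remaining countdown on its cached copy. A quick sketch (the domain and nameserver names are examples):

```
# Configured TTL, straight from the authoritative name server:
dig +noall +answer mydomain.com A @ns1.mydomain.com

# Remaining TTL as currently cached by a public resolver:
dig +noall +answer mydomain.com A @8.8.8.8
```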

load testing based on access log recording

I need to load test a SaaS system to find capacity and bottlenecks. My preferred method is to record a few hundred thousand, or millions, of real access-log URLs and run them as a test with an increasing hit rate. I've looked at several services; all have their pros and cons. Before I dive into them, which stress test service would you recommend specifically for the use case above?

Answer

You can use Apache JMeter - the best and free. Alternatively, if you want to spend money, you can go for HP LoadRunner.
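Whichever tool you pick, the recorded log usually has to be reduced to a plain URL list first. A minimal sketch, assuming the common combined log format where the request path is the seventh whitespace-separated field (the host name and paths below are examples):

```shell
# Build a tiny sample access log (stand-in for the real one).
cat > /tmp/access.log <<'EOF'
1.2.3.4 - - [01/Nov/2019:10:00:00 +0000] "GET /index.php HTTP/1.1" 200 512 "-" "Mozilla"
1.2.3.4 - - [01/Nov/2019:10:00:01 +0000] "POST /api/save HTTP/1.1" 201 64 "-" "Mozilla"
EOF

# Field 7 of the combined format is the request path; prefix it
# with the site's base URL to get a replayable URL list.
awk '{ print "https://www.example.com" $7 }' /tmp/access.log > /tmp/urls.txt
cat /tmp/urls.txt
```

The resulting /tmp/urls.txt can then be fed to JMeter (e.g. via a CSV Data Set Config) or replayed with siege -f.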

networking - AD Dynamic DNS not working with Linux IPv6

I have an AD DNS/DHCP-enabled server, and mixed Windows and Linux machines/servers. IPv4 is working great with DNS/DHCP, even for the Linux systems. IPv6 is working great, except that it won't generate a DHCP lease or DNS entry. If static IPv6 AAAA records are used, I can use my RAS server, browse IPv6 websites, and connect to any machine, even across the internet.

What I can do with IPv6: I can get a DHCP IP address on any system, Linux or Windows. I have fd0a:fb5*:bdc*:0::x as my /64 prefix and it works great; ALL systems have an address with this prefix. I can ping, DNS lookup, and connect to Windows systems and websites perfectly. The only issue I have is that IPv6 leases and DNS AAAA records are not added/updated dynamically for the Linux systems; A records and IPv4 leases all work fine. I have added a dedicated user called DHCPDynUpd and added it to the DNSUpdateProxy group. I then assigned a password that never expires, denied logon hours, and disallowed signing into
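As a debugging aid, a Linux client can attempt the same secure dynamic update manually with nsupdate's GSS-TSIG mode: if the manual update succeeds after kinit but the automatic one doesn't, the problem lies in the client's DHCP/DNS hooks rather than in AD. A sketch — the server name, record name, and the documentation address 2001:db8::10 are placeholders:

```
# Obtain a Kerberos ticket first, e.g.:  kinit someuser@EXAMPLE.LAN
nsupdate -g <<'EOF'
server dc1.example.lan
update add host1.example.lan 3600 AAAA 2001:db8::10
send
EOF
```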

networking - Linux clients and Windows Servers can connect but not windows clients

This is driving me insane because I can't make heads or tails of it. We have two DCs (W2K3 SP1) and I've tried this once on each machine as a sanity check. DHCP is being served by either one of the machines, and all machines get an address no problem. The servers can connect/ping/browse to the web, and so can all our Linux clients - but NONE of our Windows clients (all Windows 7) can. I can do anything within the network; I can even ping the firewall/router, but nothing from the Windows clients is leaving the confines of our subnet. I don't get it. The Linux and Windows clients are both served from the same DHCP server, the gateway is the same, everything is the same. Anyone care to take a shot at how to resolve this? I tried adding explicit routes on the clients, but still no go. Some points that might help: this is behind a SonicWall firewall (which I absolutely despise). The DCs are two VMs on two different boxes, with DHCP being provided by these VMs. There are maybe a half dozen other VM

amazon ec2 - AWS EC2 Mailserver Failover Strategies done right

I've been researching this topic really hard over the last few days and I just want to discuss it with a few specific questions. I did not find any suitable thread here that covers my needs and, especially, that is reasonably current - most posts about this topic are from around 2010 when, I guess, AWS last had a big failure (a whole region in the US was down, if I remember right).

The current state: We're running a mail server based on Ubuntu with Postfix/Dovecot/Horde, reading all mail-based configs out of a MySQL database. This runs as an EC2 instance with EBS storage where the OS and, currently, also the mails are stored. So far so good, but we're a startup and not just a private person who needs this server - it is a mail service for our customers, super critical and very important for us. After a few failures and downtimes in the first year, I will dramatically improve the setup - so I thought about "redundancy", basically.

The requirement: The server

Best practices for FQDN for standalone domain (is a two part domain.tld okay?)

I've searched quite a bit and can't seem to find a straight, modern answer on this. If I am hosting a domain, say mydomain.com, on a machine which is going to be used solely for that domain, and there are no subdomains, is there a real, practical reason besides compliance to create an arbitrary hostname (i.e. myhost) just in order to have a three-part FQDN (myhost.mydomain.com), to satisfy some RFC or convention that's expected? This seems to create a lot of undue complexity from my perspective, and I'm not sure if there's an advantage to it or if it's just a hold-over from a time when all web resources came from subdomains such as www and ftp, which might need to scale to separate machines. I don't use www on my domain either, which is ill-advised for all I know from an administrator's perspective (though removing it is the norm from a designer's perspective)...

Answer

You should never give your server a name containing o