
Posts

Showing posts from November, 2015

ubuntu - Cron task doesn't run

I have this crontab task: $ sudo crontab -l -u root 0 22 * * * certbot renew --cert-name app.my_website123.net --pre-hook "systemctl nginx stop" --post-hook "systemctl nginx start" >> /home/my_user/cron_log1.log 2>&1 certbot renew requires root. The log file still doesn't exist after several days, which implies that the task never gets executed. Why is that?
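Two things stand out in that entry, independent of why the log never appears: the systemctl hooks have the verb and unit reversed (it is systemctl stop nginx, not systemctl nginx stop), and cron runs with a very small PATH, so absolute paths are safer. A hedged sketch of a corrected entry, assuming certbot lives at /usr/bin/certbot (adjust paths to your system):

    # root's crontab (sudo crontab -e -u root); paths are assumptions
    0 22 * * * /usr/bin/certbot renew --cert-name app.my_website123.net --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx" >> /home/my_user/cron_log1.log 2>&1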

raid - PCIe SSD vs SSD vs SAS - which is best for Database?

Since I will be shipping soon, I wonder which of these will generally boost database performance the most. The follow-up question is: which will most benefit a database that does a lot of random block reads and record deletions? For single-drive performance: OCZ 240GB PCI Express RevoDrive 3 X2 SSD, Intel 240GB SSD 530, 15k SAS Raptor. And, most interesting, what about the performance of the same drives in RAID 10? Answer PCIe Flash will offer significantly better overall bandwidth than SSD or HDD and therefore should provide better sequential IO. This may not play as big a part in random IO as you might think, but you should still see a degree of improvement over regular SSD, and both will be very significantly better at this than HDD. Obviously cost/GB comes into play, but that's your call. Bear in mind that some PCIe Flash disks aren't bootable.
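For the random-read-and-delete pattern described, raw bandwidth matters less than random 4K IOPS, which is easy to measure on each candidate drive before committing. A hedged sketch using fio (the device path is an assumption; as written this only reads, but double-check the path before adding any write tests):

    # random 4K reads, queue depth 32, direct IO - compare the results across drives
    fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based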

virtualization - vSphere education - What are the downsides of configuring VMs with *too* much RAM?

VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Unfortunately, many vendor requirements and people unfamiliar with virtualization request more resources than necessary... I'm interested in quantifying the impact of this decision. Some examples from a "problem" cluster. Resource pool summary - Looks almost 4:1 overcommitted. Note the high amount of ballooned RAM. Resource allocation - The W

apache 2.2 - Apache2 - Forward a value from URL param OR cookie, whichever is present

I managed to use apache to strip off a url param and stuff it in a header to be passed on to another server. See this question for reference. Now I would like to add to this by getting the value for the header from a cookie if it's not present in the URL. Here's what I tried: RewriteEngine On RewriteCond %{QUERY_STRING} ^(.*)memberUuid=(.*)$ RewriteRule ^/(.*)$ http://127.0.0.1:9000/$1 [CO=memberUuid:%2:localhost,E=memberUuid:%2,P] RewriteCond %{HTTP_COOKIE} memberUuid=(.*) RewriteRule ^/(.*)$ http://127.0.0.1:9000/$1 [E=memberUuid:%2,P] ProxyPreserveHost On ProxyPass /excluded ! ProxyPass / http://127.0.0.1:9000/ ProxyPassReverse / http://127.0.0.1:9000/ Header add iv-user "%{memberUuid}e" RequestHeader set iv-user "%{memberUuid}e" This still works if the memberUuid is in the URL, but it doesn't seem to work with the cookie. I have the memberUuid cookie in my browser, but if I leave the URL param off, the iv-user header has an
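One thing worth checking in the cookie branch, offered as a hedged guess rather than a confirmed fix: the cookie RewriteCond has only one capture group, so its value is available as %1, while the rule sets E=memberUuid:%2, which would expand to nothing. A minimal sketch of that branch with the back-reference adjusted and the capture stopped at the next cookie delimiter:

    RewriteCond %{HTTP_COOKIE} (?:^|;\s*)memberUuid=([^;]+)
    RewriteRule ^/(.*)$ http://127.0.0.1:9000/$1 [E=memberUuid:%1,P]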

security - I am under DDoS. What can I do?

This is a Canonical Question about DoS and DDoS mitigation. I found a massive traffic spike on a website that I host today; I am getting thousands of connections a second and I see I'm using all 100Mbps of my available bandwidth. Nobody can access my site because all the requests time out, and I can't even log into the server because SSH times out too! This has happened a couple times before, and each time it's lasted a couple hours and gone away on its own. Occasionally, my website has another distinct but related problem: my server's load average (which is usually around .25) rockets up to 20 or more and nobody can access my site just the same as the other case. It also goes away after a few hours. Restarting my server doesn't help; what can I do to make my site accessible again, and what is happening? Relatedly, I found once that for a day or two, every time I started my service, it got a connection from a particular IP address and then crashed. As soon as

domain name system - Impact of changing DNS glue records from registrar

I have three domains from three different registrars (e.g. example.com , example.net , example.org ). The DNS records for each domain are handled separately using the control panel of each registrar. I wish to centralize all zones on a single service by changing all Glue Records to point to a single DNS server, say ns1.example.net . The procedure I ended up with is: Create each zone on the new server. "Copy" all records ( A , AAAA , MX , etc.) from the registrars to the equivalent zone. Change the glue records on all domains to point to the new server ns1.example.net . However, I am troubled by the following: Any cached A -record queries from clients or resolvers shouldn't be an issue, since the records on my NS server will point to the same IPs as the original records did. Is that correct? What about NS record queries? Do clients or recursive DNS servers cache NS record queries? If that is the case, then there is a possibility that as soon as I change the glue record, clients
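Regarding the caching question: NS responses are indeed cached by recursive resolvers for their TTL, so it helps to check (and, where it is under your control, lower) those TTLs before the switch. A small check, using a public resolver purely as an example:

    # what TTL recursive resolvers are currently handing out for the delegation
    dig NS example.com @8.8.8.8 +noall +answer
    # what the authoritative servers themselves publish
    dig NS example.com @ns1.example.net +noall +answer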

apache 2.2 - setting permissions in user's home folder, to stop a 403 error

In the root folder of my school account I have a bunch of files and folders, e.g., $ ls foobar.txt Private Public www By default, all the files in Public/ and www/ are viewable on the web via myschool.edu/myusername/Public and myschool.edu/myusername/www. But how do I make foobar.txt viewable? I want myschool.edu/myusername/foobar.txt to return the actual contents of foobar.txt, instead of a 403 error. I tried setting permissions on foobar.txt (chmod 755 foobar.txt, and I even tried chmod 777 foobar.txt, so that the current permissions are -rwxrwxrwx), but I still get a 403 error. If it matters, it looks like my school uses an Apache/1.3.41 server. Reason I'm asking: I have no idea how, but I had a "stuff" folder in my home directory, and Google somehow managed to index and crawl its contents. So "myschool.edu/myusername/stuff/grr.html" now appears in Google's search results and cache, even though I still get a 403 error if I try to access the file myse

domain name system - Exchange 2010 OWA send connector issue

I have problems sending emails outside the organization using OWA (Exchange 2010 installed on SBS 2010, domain is domain.local, local DNS). The web server and email server are hosted remotely; I am using a POP connector to pull emails into Exchange 2010. I have created two send connectors, one that handles internal emails and one for external email. The problem is the connector for emails over the internet, which is configured like: Address Space SMTP, *, 1 Network Route email through the following route host Smart host authentication: Basic authentication over TLS + username, password If I don't use a smart host I get the following error 451 4.4.0 Primary target IP address responded with: “454 4.7.5 Certificate validation failure.” If I use a smart host, when using OWA outside the organization (different ISP) the message gets stuck in a queue without being sent. What am I doing wrong when configuring the send connector? Answer You likely only need one send connector, and since your prima

tcpip - RPC SERVER is unavailable

Windows 2003 is getting this error. Restarting the machine, or just the network connection, fixes the problem. The date and time are the same on both machines. TCP/IP is started and running, and Remote Registry is running. The recent changes to the network are that the DC crashed and went to IT for restore, and the antivirus is not up to date because of a license issue (purchase delay). What can fix the problem?

Using NTP to sync a group of linux servers to a common time source

I have 20 or so linux servers and I want to sync all of their clocks to a single NTP server, which is in the same rack and switch as my servers. Nothing is virtualized. Our admins are having trouble getting the clocks on the various machines synced closer than about 500 ms. I would have guessed, and this post implies, that we should be able to get the linux boxes synced to within 2 ms of the source and each other. Are my expectations for NTP unreasonable? Any hints as to what the admins should be doing/checking?
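Single-digit-millisecond agreement on a LAN is a reasonable expectation for ntpd, so 500 ms usually points to a configuration or competing-time-daemon issue rather than a protocol limit. A minimal client-side sketch, assuming the local NTP server is 10.0.0.10 and that ntpd alone is disciplining the clock:

    # /etc/ntp.conf on each of the 20 servers (server IP is an assumption)
    server 10.0.0.10 iburst minpoll 4 maxpoll 6
    driftfile /var/lib/ntp/ntp.drift

    # then watch convergence; the offset column is in milliseconds
    ntpq -p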

storage - What is the difference between a HBA card and a RAID card?

I thought I knew the difference between HBA and RAID. In my mind, HBA is offloading from the main motherboard/CPU and is simply JBOD... usually has an external SAS ports, whilst a RAID card does the same job as HBA but adds all the nice RAID levels and possibly battery backup + other benefits. After looking at the LSI website for a product, I see that they have HBA cards that have built in RAID, for example the LSI SAS 9211-8i Host Bus Adapter . So... Clearly I am wrong! What is the difference between an HBA card and a RAID card?

regex - nginx return 301 / redirect

Among all the 'redirect in nginx' questions I couldn't find how to redirect (using return 301 and preferably no ifs) using regexps. I have a link to my website and I'd like to remove the parameter at the end: domain.com/article/some-sluggish-link/?report=1 #number at end Regex to find this: \?report=\d*$ For this I want a 301 redirect to: domain.com/article/some-sluggish-link/ In nginx.conf I have 3 redirections: server { listen 80; server_name subdomain.example.com.; #just one subdomain } server { listen 80; server_name *.example.com; return 301 http://example.com$request_uri; } server { listen 80; server_name example.com; } and it works; it 301-redirects all www., ww., aaa., and every subdomain, except one particular subdomain, to the main domain.com I'd appreciate any help Cheers! EDIT 25/03/2015 I already have "location /" in my conf file: location / { uwsgi_pass unix://opt/run/ps2.sock; include uwsgi_pa
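nginx location blocks never see the query string, so a pure location/return rule cannot match ?report=1 by itself; the usual compromise is a map on $args feeding a single return (still technically one if, but a safe one). A hedged sketch, to be merged into the existing example.com server block:

    # in the http{} context: flag requests whose query string is exactly report=<digits>
    map $args $has_report {
        default         0;
        "~^report=\d*$" 1;
    }

    # inside the existing "server_name example.com;" block, before location /
    if ($has_report) {
        return 301 $uri;   # $uri carries no query string, so ?report=N is dropped
    }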

networking - Why don't more organizations use inside-to-inside NAT or similar solutions to allow NAT hairpins?

Inside-to-inside NAT aka NAT loopback solves hairpin NAT issues when accessing a web server on the external interface of an ASA or similar device from computers on the internal interface. This prevents DNS admins from having to maintain a duplicate internal DNS zone that has the corresponding RFC1918 addresses for their servers that are NATted to public addresses. I'm not a network engineer, so I might be missing something, but this seems like a no-brainer to configure and implement. Asymmetric routing can be an issue but is easily mitigated. In my experience, network admins/engineers prefer that systems folks just run split-dns rather than configuring their firewalls to properly handle NAT hairpins. Why is this? Answer There are a few reasons why I wouldn't do it: Why put extra strain on the DMZ routers and firewall if you don't need to? Most of our internal services are not in the DMZ but the general corporate area, with proxying services in the DMZ for occasio

storage - ZFS stripe on top of hardware RAID 6. What could possibly go wrong?

I have a SAN rack with 36 × 4TB HDDs. The RAID controller does not support RAID60, nor more than 16 HDDs in one RAID group. So I decided to make 2 RAID6 groups of 16 HDDs, or 4 groups of 8 HDDs. I want to expose all storage as one partition. So, what could possibly go wrong if I use a ZFS pool on top of hardware RAID6? Yes, I know it is strongly recommended to use native HDDs or pass-through mode, but I do not have that option. Or should I stay away from ZFS and software RAID in this situation? (I'm mostly interested in compression and snapshots.)
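If the pool does end up on top of the two hardware RAID6 LUNs, the ZFS side is just a stripe of plain vdevs, which is enough for the compression and snapshots mentioned (all redundancy decisions stay with the controller). A minimal sketch, with device names as assumptions:

    # two RAID6 LUNs exposed by the controller, e.g. /dev/sdb and /dev/sdc (assumed names)
    zpool create tank /dev/sdb /dev/sdc
    zfs set compression=lz4 tank
    # snapshots work the same as on a native pool
    zfs snapshot tank@before-migration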

hp - Proliant ML310e Gen8 Smart Array Predicted Failure Issue with SSD

This is a question related to this one: Third-party SSD solutions in ProLiant Gen8 servers but not covered by that question or its answers. I have an OCZ 120 GB SSD as the system drive on a ProLiant ML310e Gen8 server, sitting on SATA port 6. In order to use that port (on the motherboard) for the SSD, I must set the SATA controller to Smart Array controller. This gives me all 4 bays for a RAID set of HDD drives. So far, so good. I've also installed Win Server 2008 R2 on the system drive and all works well. BUT - when I check the Smart Array, it says "predicted failure of drive 0 (SSD)". I have checked and verified the actual SMART settings for the SSD, and the drive is 100% OK. It's brand new, and the SMART settings have been verified as 100% OK by OCZ support. I cannot clear this error on the HP diagnostic side, and at OCZ's suggestion I have been trying to figure out how to turn off SMART diagnostics on the HP, to no avail. SO - how can I either clear the HP SMART Ar

What's the advantage of using a bash script for cron jobs?

From my understanding you can write your cron jobs by editing crontab -e . I've found several sources that instead refer to a bash script in the cron job, rather than writing the job line for line. Is the only benefit that you can consolidate many tasks into one cron job using a bash script? Additional question for a newbie: editing with crontab -e refers to one file, correct? I've noticed that if I open crontab -e and close without editing, when I open the file again there is a different numerical extension, such as: "/tmp/crontab.XXXXk1DEaM" 0L, 0C I thought the crontab was stored in /var/spool/cron or /etc/crontab ?? Why would it store the cron file in the tmp folder? Answer It depends on what you're doing with the job. cron does not give you a real scripting environment, so, if you're doing something more complicated than simply calling a couple of commands, you probably want to use cron to call a script. You can also deal with things like variable expansion in a
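To make the difference concrete, here is a hedged, hypothetical comparison of the same nightly job written inline versus delegated to a script (paths and commands are made up for illustration):

    # inline: all the logic crammed into the crontab line, hard to read and to test by hand
    0 2 * * * cd /var/www/app && tar czf /backups/app-$(date +\%F).tgz . >> /var/log/backup.log 2>&1

    # script: the crontab stays trivial; the logic lives in a file you can edit, test and version
    0 2 * * * /usr/local/bin/nightly-backup.sh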

pci dss - Ubuntu PCI-DSS Compliance Issue

I'm trying to get PCI compliant and the PCI scanning company is flagging our Ubuntu 12.04 PHP 5.3.10-1ubuntu3.9 for CVE-2013-1635. According to http://people.canonical.com/~ubuntu-security/cve/2013/CVE-2013-1635.html the Ubuntu response is "We do not support the use of open_basedir" and all versions have been marked as ignored. I'm at a loss for what to do here. I've pointed my scanning company to this same URL, but they don't accept that as an answer. What should I do? Update I do not use this functionality and the open_basedir directive is disabled in php.ini. However, they do not consider this a proper solution. Here is their response to the denial of my dispute: We have denied this dispute based on the information provided regarding how this finding has been addressed. The version of PHP that is currently running on this system has been found to not properly sanitize user-supplied input. Despite the fact that 'open_basedir' is disabled on thi

security - What should every sysadmin know before administrating a public server?

Similar to this question on Stack Overflow, what should a sysadmin who is used to private, intranet-type situations know before being the administrator of a public site? These could be security-related things like "don't leave telnet open," or practical things like how to do load-balancing for a high-traffic site. Answer Every app, every binary, every package that exists on the server is a liability. Subscribe to the 'least bit' principle; if it's not installed, it can't be compromised. Implement intrusion detection, such as Tripwire or similar, and scan frequently. Invest in a hardware firewall and only open the ports you need for your application. Do not allow your administration ports (ssh, rdp etc) to be publicly visible; restrict them to approved management IP addresses. Have backups of your firewall/switch/router configurations at the time of going into production. If one of those devices is compromised, it is signific

networking - Static IP settings wrong? Why?

I have this issue when I try to copy files over my network (from PC to NAS). The first file seems to copy without a problem until it reaches 99%. It hangs for minutes and eventually fails. Let me summarize my equipment: NAS: Brand new Synology RS815 with 4x3TB in RAID10 configuration Transfer medium: CAT6 cabling Switch: Cisco SG500-28P Patch Panel: Tried T568A and T568B termination on the patch panel. No difference there. I'm building up my network and have already connected a few cables to the patch panel. My PC is connected to the switch without use of the patch panel (RJ45 connector on the cable). The NAS is placed near my PC for testing and is connected to a CAT6 wall socket. The other end of the cable is connected to the patch panel, where it is patched to the switch. I can browse the NAS and manage it via the web interface. I believe the problem has something to do with the wiring on the patch panel or with the switch. When I connect the NAS to another small switch near my PC, I don't have

nat - Iptables string

I have an iptables rule like this: iptables -t nat -I PREROUTING -p tcp --dport 80 -s 192.168.1.2 -j DNAT --to-destination 192.168.1.1:80 It works perfectly, but I want to redirect only for one URL, like this: iptables -t nat -I PREROUTING -p tcp --dport 80 -s 192.168.1.2 -m string --string "google.com" -j DNAT --to-destination 192.168.1.1:80 which does not work in any way... please help me with this
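One concrete problem with the second rule: the string match module requires an --algo option (bm or kmp), so the rule as written is normally rejected outright. Even with that fixed, matching "google.com" in packet payloads only works for plaintext traffic that actually carries the hostname in the inspected packet, so treat this as a hedged sketch rather than a reliable per-URL redirect:

    iptables -t nat -I PREROUTING -p tcp --dport 80 -s 192.168.1.2 \
        -m string --algo bm --string "google.com" \
        -j DNAT --to-destination 192.168.1.1:80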

storage - IO/s and MB throughput for Various RAID Arrays

Are there any resources that give typical throughput and IOPS figures for various RAID arrays under sequential and random patterns? In my case, at the moment, I am specifically interested in: a 6-disk RAID 10 array with 10k SAS drives, and sqlio numbers. I know there are a lot of variables here: how many operations are pending, the controller, caches, etc. I have also seen the "formulas" for predicting RAID performance (which I kind of feel are perhaps a bit of malarkey), but some general targets for what good benchmark numbers look like would be helpful.
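For a rough target, the commonly cited rule of thumb is: functional IOPS = raw IOPS × read% + (raw IOPS × write%) / write penalty, where the write penalty for RAID 10 is 2. A worked example under the assumption of roughly 140 random IOPS per 10k SAS drive:

    raw IOPS             ≈ 6 drives × 140 IOPS/drive        = 840
    100% random read     ≈ 840
    100% random write    ≈ 840 / 2 (RAID 10 write penalty)  = 420
    70/30 read/write mix ≈ 840 × 0.70 + (840 × 0.30) / 2    ≈ 714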

domain name system - High Availability DNS Hosting Strategy?

I'm trying to find a few options of ways to do high availability DNS hosting for a few existing websites. This morning, the company I work for was brought to its knees because the DNS hosting we have for our domains through our registrar ( bulkregister.com ) went down. I'm now being tasked with finding an alternative which will not put us at the mercy of a single DNS provider. What we're looking for: No single point of failure. Time effective. One solution that has been suggested is to do multiple DNS hosts. This seems like a great alternative, but we have over 20 domains, and updating an IP address on all of those domains across two providers is prohibitive. Cost effective. I have to sell this to upper management. Joy is me. So what methods exist which support this? I'm more of a programmer myself, but they've tasked me with this, so I wanted to get the opinion of people more experienced than I am. Answer You can use any number of DNS hosters tha

java - Configure Database Properties for Tomcat Webapp

I set up a sample application in Tomcat but am having trouble getting the database connection working. It is a standard WAR package written with the Spring framework and uses a MySQL database. The application is the Granny Address Book from http://www.cumulogic.com/downloads/sample-applications/ . I have deployed it under tomcat/webapps/ (Tomcat 7.0.42 with MySQL 5.1.73 running on the same host). MySQL DB name: grannydb JNDI name: MySqlGBDS I cannot locate where to place the database connection settings, as it does not have the usual database.properties file. The only reference to database settings is in granny.xml : MySQL-5.5.27 1 10 demo demodemo 3306 UTF-8 But this file is not packaged inside the webapp (it comes separately), and it lacks a database host name. I tried placing granny.xml inside the webapp, under WEB-INF/classes/META-INF/spring/ , but it fails to connect to the database. The current behavior is that the webapp starts but catalina.out
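If the app looks the DataSource up through JNDI (the MySqlGBDS name suggests it does), the connection settings usually go in Tomcat's context rather than a properties file. A hedged sketch of a Resource entry for conf/context.xml or the webapp's META-INF/context.xml; the exact JNDI name the app expects (jdbc/MySqlGBDS versus MySqlGBDS), the host, and the credentials are assumptions taken from the granny.xml values above:

    <Resource name="jdbc/MySqlGBDS" auth="Container" type="javax.sql.DataSource"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/grannydb?useUnicode=true&amp;characterEncoding=UTF-8"
              username="demo" password="demodemo"
              maxActive="10" maxIdle="1" />

The MySQL Connector/J jar also has to sit in tomcat/lib for a container-managed pool like this to work.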

CNAME for top of domain?

Is it possible to set a CNAME record at the top of a domain? (i.e. @ CNAME www , @ CNAME foobar.com. , etc.) My ISP says that it's only possible to use CNAMEs for subdomains, but I've read somewhere else that it should be possible even if not recommended. Answer Not possible - this would conflict with the SOA and NS records at the domain root. From RFC 1912 section 2.4: "A CNAME record is not allowed to coexist with any other data."
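A short zone-file sketch of the conflict and the usual workaround (names and addresses are placeholders):

    ; not allowed: a CNAME cannot coexist with the SOA/NS records required at the apex
    example.com.   IN  SOA    ns1.example.net. hostmaster.example.com. ( ... )
    example.com.   IN  NS     ns1.example.net.
    example.com.   IN  CNAME  www.example.com.   ; rejected / undefined behaviour

    ; common workaround: publish plain A/AAAA records at the apex instead
    example.com.   IN  A      192.0.2.10
    www            IN  CNAME  example.com.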

cisco - Puppet Error: undefined method 'captures'

I posted this question on the networkengineering SE site, but it was determined to be off topic.... blah. I'm toying with the idea of using Puppet for core network device configuration to increase the accuracy of the configs my team is generating. I wanted to start by setting up a demo and learning more about how Puppet works in general. I installed Puppet on our team's networking utility node (an Ubuntu 12.04 LTS VM) and configured a single device in my ~user/.puppet/device.conf, which looks something like.... [XX-core01.XXX.local] type cisco url ssh://user:reallygoodpassword@XX-core01.XXX.local/ I ran puppet device --verbose and issued a cert. But once I did, I got an error that I'm unable to find any information about. info: starting applying configuration to XX-core01.XXX.local at ssh://user:reallygoodpassword@XX-core01.XXX.local/ info: Creating a new SSL key for XX-core01.XXX.local info: Caching certificate for ca info: Creating a new SSL certificate request for

domain name system - Is it bad practice to have a CNAME that does not match SSL cert CN?

This is the scenario... $ openssl x509 -subject -noout -in cert.crt subject= /CN=example.org $ curl -I "https://example.org" HTTP/1.1 200 OK $ dig example.org example.org. 60 IN A 192.0.2.1 $ dig foo.example.org foo.example.org. 60 IN CNAME example.example2.com. example.example2.com. 60 IN A 192.0.2.1 $ curl -I "https://foo.example.org" HTTP/1.1 200 OK $ curl -I "https://example.example2.com" curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate. When the alias foo.example.org is used I don't 'see' the CNAME example.example2.com . So it doesn't really matter that the CNAME doesn't match the cert name. But I was asked by someone which of the three names should be used. My answer was all three can be used except example.example2.com for SSL. That seems strange to me from an end user's perspective. The cert t

apache 2.4 - Default VirtualHost with Apache2

I have a web server using Apache 2.4.10 . Let's assume I have several domain names: www1.example.com. www2.example.com. www3.example.com. I want my web server to be contacted only via these domain names. This means that I don't want people connecting to my server over HTTP by typing my IP address, or by using another domain name (let's assume there are a lot of wwwX.duplicate-example.com names created by another person and pointing to my IP). At the very least, I want them to land on a default 404 page I'll set up under /var/www/404/index.html . At the moment, I can run my 3 wwwX.example.com. websites with separate pages using VirtualHost : DocumentRoot "/www/www1" ServerName www1.example.com # Other directives here ... DocumentRoot "/www/www2" ServerName www2.example.com # Other directives here ... DocumentRoot "/www/www3" ServerName www3.example.com # Other directives here ...
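Since Apache hands any request that matches no ServerName to the first VirtualHost it loads, one common pattern is a catch-all vhost declared ahead of the real ones. A hedged sketch for 2.4, assuming the 404 page lives under /var/www/404 as described:

    <VirtualHost *:80>
        # default/catch-all: direct IP access and unknown Host headers land here
        ServerName default.invalid
        DocumentRoot /var/www/404
        ErrorDocument 404 /index.html
        # answer everything except the 404 page itself with a real 404 status
        RedirectMatch 404 ^/(?!index\.html$)
        <Directory /var/www/404>
            Require all granted
        </Directory>
    </VirtualHost>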

RAID array Considerations - Any advice?

We just bought a Dell PowerEdge r510 (12 drive bays) that will fill the role of an archive server. We have 6 drives (1TB each) installed. The plan is to have all the drives in a single RAID array and carve out an OS partition and an Archive partition. We intend to expand to all 12 drives, but need to preserve the main archive partition when we do (i.e. we'd like to add more drives and expand the space available on the archive partition, not create another array or another partition to allocate the additional space). Questions: Is there a good way to do this (if at all)? What would the preferred RAID type be if it's possible (5, 1+0, etc.)? Answer If I could suggest a slight modification of your plans: Put the OS on two smaller disks and mirror them. Create a second array, preferably RAID 6 with a hot spare, and make it a dynamic partition within Windows so you can expand later. Don't dynamically expand volumes that are on the system disk. I've heard bad t

apache 2.2 - CA SiteMinder Configuration for Ubuntu

I receive the following error when attempting to start apache through the init.d script: apache2: Syntax error on line 186 of /etc/apache2/apache2.conf: Syntax error on line 4 of /etc/apache2/mods-enabled/auth_sm.conf: Cannot load /apps/netegrity/webagent/bin/libmod_sm22.so into server: libsmerrlog.so: cannot open shared object file: No such file or directory SiteMinder does not officially support Ubuntu, so I am having trouble finding any configuration documentation to help me troubleshoot this issue. I successfully installed the SiteMinder binaries and registered the trusted host with the server, but I am having trouble getting the apache mod to load correctly. I have added the following lines to a new auth_sm.conf file in /etc/apache2/mods-available and symlinked to it in /etc/apache2/mods-enabled: SetEnv LD_LIBRARY_PATH /apps/netegrity/webagent/bin SetEnv PATH ${PATH}:${LD_LIBRARY_PATH} LoadModule sm_module /apps/netegrity/webagent/bin/libmod_sm22.so SmInitFile "/etc/apache2
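SetEnv only affects the environment handed to CGI scripts and similar; it does not influence the dynamic linker at the moment Apache itself loads libmod_sm22.so, which is why libsmerrlog.so is not found. Two common ways around that, sketched under the assumption that all the SiteMinder libraries live in /apps/netegrity/webagent/bin:

    # option 1: register the directory with the dynamic linker
    echo "/apps/netegrity/webagent/bin" > /etc/ld.so.conf.d/siteminder.conf
    ldconfig

    # option 2: export the path in the environment Apache is actually started with
    # (on Debian/Ubuntu that is /etc/apache2/envvars)
    export LD_LIBRARY_PATH=/apps/netegrity/webagent/bin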

Upgrade Apache 2.2.22 to 2.2.26 on Ubuntu 12.04 LTS

I've been scouring the internet for what should be a seemingly simple process, but I can't seem to get this to work. I currently have Apache 2.2.22 running on my Ubuntu Server, and I simply need to upgrade to the latest release. Ubuntu has not updated their repository yet, so I can't use apt-get (sadly). I found this post detailing how to install 2.4.3 from a .tar.bz2 so I figured I would try that. I uninstalled Apache 2.2.22 and followed all the steps (but used a .tar for 2.2.26 of course). When I run /etc/apache2/bin/apachectl start , it doesn't complain, but the web server doesn't appear to be working. Going to website just results in Chrome saying "The page could not be displayed". No error 500, nothing. (I should note I ran configure like this: ./configure --prefix=/etc/apache2 ) Running service apache2 start simply results in it saying No Apache MPM Package Installed. Any ideas on how to perform this update? Answer Debian/Ubuntu packa

iis - Redirection, subtitute or rewrite

We have a homepage hosted by another provider on Amazon, and we are developing an event-related page. Due to SEO needs we need a redirection from http://www.example.com/event (hosted on Amazon, out of our control) to http://event.example.com (hosted on our servers), but the URL in the user's browser must remain http://www.example.com/event while showing the content of http://event.example.com . The web page developed by us at http://event.example.com is a .NET IIS page, so we guess that between the Amazon page and our .NET page we need an Apache reverse proxy and probably mod_substitute/mod_rewrite help. What would be the necessary Apache rules? Any other suggestion, such as an IIS rewrite approach, would also be appreciated. Thanks Answer Sorry, I meant we do not manage this service; they only do a redirection. A "redirection" (as in an external redirect ) is presumably not desirable for SEO reasons. An IFRAME (as @bjoster suggests) would require control of the content
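Keeping www.example.com/event in the address bar while serving content from event.example.com means whatever answers for www.example.com has to proxy, not redirect. A hedged sketch of the Apache side, assuming mod_proxy and mod_proxy_http are enabled and that this Apache sits in (or in front of) the www.example.com hosting:

    ProxyRequests Off
    ProxyPreserveHost Off

    # serve /event from the IIS site while the browser keeps www.example.com/event
    ProxyPass        /event/  http://event.example.com/
    ProxyPassReverse /event/  http://event.example.com/

If absolute links inside the IIS pages point at event.example.com, mod_substitute (or outbound rewrite rules on the IIS side) would additionally be needed to fix up the HTML.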

How do I configure a naked domain and a mail server to a cloud hosted app?

I have a web app with no fixed IP address: myapp.apps.com And a custom domain: example.com I have the www subdomain (www.example.com) pointing to the web app with a CNAME record. I have the mail server hooked up with MX records (me@example.com). What I want is to be able to hit the naked domain in the web browser (example.com). How do I do this? Bear in mind that if I point a CNAME record to the naked domain it will override the mail server settings. I have access to advanced DNS settings (A, MX, CNAME, TXT, SRV).

windows - Slow internet connection for no reason

We have a cable internet connection. I've called the company when this happens and they say there's been no downtime on the modem and the signal is strong and there's no fixing or outages in the area. I mostly trust them on that. Cable modem -> 4 port gigabit Netgear switch -> Sonicwall -> Cisco 2960S -> Dell T410 DC running Server 2008R2 with basic settings Currently have about 50-75 connected computers or servers. Internet is great for a week or two, but then (at random) it gets really slow for about half an hour. Then back to normal speed. What would cause this out of the gear listed? Is there a tool for figuring this out?

networking - DNS how to dynamically bind subdomains to ip and port pairs

I'm using bind9 and I have a zone file set up with a few subdomain A records. I'm wondering if I can create a subdomain record that points to an IP address on a specific port. I'm currently updating the zone file dynamically. I'd need some service with an API that can handle an unlimited number of subdomains. I also need to map UDP and non-HTTP TCP ports. I think I might need to use iptables? Answer No, DNS only takes care of name resolution to an IP address. It does not offer the capability to map to a port. This question has a more detailed explanation -- How to use DNS/Hostnames or Other ways to resolve to a specific IP:Port Some DNS services allow URL forwarding, which I think should do what you are looking for. For example, NameCheap offers this feature -- https://www.namecheap.com/support/knowledgebase/article.aspx/545/51/how-do-i-set-up-url-forwarding-when-i-use-your-free-dns-service

installation - Boot and Install Windows from a USB thumb drive

Installing Windows from a thumb drive is vastly superior to burning a copy to a DVD which will fill some landfill somewhere with toxic stuff. Not to mention it's about 50x faster to install Windows from a USB Thumb Drive. How do you get the bits onto the thumb drive so that you can boot from it and do a clean install? Answer Update: Microsoft has created the Windows 7 USB/DVD Download tool to make this very easy. I used this guide as a set of directions - http://kurtsh.spaces.live.com/blog/cns!DA410C7F7E038D!1665.entry 1. Get a USB Thumbdrive between 4-32GB. If the drive is larger than 32GB, Windows cannot format it as FAT32, so an alternate utility must be used. Windows can still read FAT32 partitions larger than 32GB, though some devices cannot. 2. Run cmd.exe as administrator and enter the following commands followed by Enter diskpart list disk select disk # (where # is your USB drive as determined from step 2) clean (This step will delete all data on your fl
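For reference, the continuation of that diskpart sequence as it is commonly published (the excerpt above is cut off); treat this as a hedged reconstruction rather than a quote of the original answer:

    create partition primary
    select partition 1
    active
    format fs=fat32 quick
    assign
    exit
    rem then copy the mounted Windows ISO/DVD contents onto the stick, e.g.
    xcopy D:\*.* /s /e /f E:\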

how can I boot linux from a software raid 1 array

I'm trying to make a raid array on an existing linux ubuntu install. I'm following this tutorial... http://howtoforge.org/software-raid1-grub-boot-fedora-8 After going through the list of things a million times I finally understand what's going on. You make the raid device, on your new blank drive, copy your old / drive to it, set up the grub menu.lst, fstab, mtab initrd and grub MBR to all point to the raid device (which I have defined and is working) and then you reboot. Once you've booted, you now run in the raid device (/dev/md0) Then you merely hook your original drive up to the raid array, it syncs and voila you're done. So I set up my menu.lst to primarily load the kernel and initrd from the raid device, and failover to my original (still intact) old disk. And it always fails over when I reboot. I boot the machine, run my new grub entry and it says "error 15 file not found." Lots of stuff on the web about it, none seem to help. The only th

security - Is there another way to run Apache2 securely for end users without using CGI mode?

Is there another way to run Apache2 securely for multiple end users (like hosting hundreds of blogs) without using CGI mode as required by suPHP? It just seems so inefficient to use CGI mode for PHP when, if we could set up permissions properly, we could perhaps host PHP through mod_php. I mean, I do want to restrict these users to their home directories for their sites, but I don't want any security issues. Answer PHP supports FastCGI out of the box, so you could use mod_fastcgi or mod_fcgid and suEXEC to run PHP scripts. This has almost the same performance as mod_php but still runs the scripts in individual user contexts. You should also read this article series about securing shared hosting platforms.
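A minimal per-site sketch of that mod_fcgid + suEXEC approach, with usernames and paths as placeholders (the php-wrapper is a tiny shell script that execs php-cgi and is owned by that user):

    <VirtualHost *:80>
        ServerName blog1.example.com
        DocumentRoot /home/user1/public_html
        SuexecUserGroup user1 user1
        <Directory /home/user1/public_html>
            Options +ExecCGI
            AddHandler fcgid-script .php
            FcgidWrapper /home/user1/fcgi-bin/php-wrapper .php
            Require all granted   # Apache 2.4; on 2.2 use Order allow,deny / Allow from all
        </Directory>
    </VirtualHost>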

Hosting many Ruby on Rails Applications/Sites

We are looking to host some 50-100 rails apps. What would be the best server model to handle this? By server model, I mean like several load balanced servers or small VPS per site etc. I've used "mod_rails" and a good estimate is that each site is going to run at around 100mb of memory. Any suggestions would be greatly appreciated.

iis 7 - ssl certificate should be added in more than one web site in IIS 7.0

I have purchased one SSL certificate which should cover the domain and its subdomains, e.g. *.example.com. I want this SSL certificate to work on both the domain website and the subdomain website. I have two different websites in IIS 7.0: example.com and xyz.example.com. Currently the SSL certificate is assigned to xyz.example.com and it is working very well. Now I want this certificate to work on the example.com website as well. I have added the certificate and it has taken the default port for HTTPS, i.e. 443. Now when I start the website, it gives me the message: This Web site cannot be started. Another Web site may be using the same port. I then changed the default HTTPS port (443) of example.com to 445 and tried to start the website, and it gives me the message: The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020) Can you please help me with this? Thanks in advance, Pranav
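With a wildcard certificate, two IIS 7.0 sites can share port 443 as long as each HTTPS binding carries a host header; the management UI does not let you type one for HTTPS, but appcmd does. A hedged sketch, using the site names from the question, run from an elevated command prompt:

    cd %windir%\system32\inetsrv
    appcmd set site /site.name:"example.com" /+bindings.[protocol='https',bindingInformation='*:443:example.com']
    appcmd set site /site.name:"xyz.example.com" /+bindings.[protocol='https',bindingInformation='*:443:xyz.example.com']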