Posts

Showing posts from October, 2019

domain name system - Pushing DNSSEC updates with offline keys

In a non-professional capacity, I look after the DNS of some 18 domains: mostly personal/vanity domains for immediate family. The whole shebang is outsourced to an inexpensive managed hosting provider who have a web interface through which I manage the zones. These domains are so unimportant that an attack targeted at them seems much less likely than a general compromise of my provider's systems, at which point the records of all their customers might be changed to misdirect traffic (perhaps with extremely long TTLs). DNSSEC could mitigate such an attack, but only if the zones' private keys are not held by the hosting provider. So, I wonder: how can one keep DNSSEC private keys offline yet still transfer signed zones to an outsourced DNS host? The most obvious answer (to me, at least) is to run one's own shadow/hidden master (from which the provider can slave) and then copy offline-signed zonefiles to the master as required. The problem is that the only machine I (want to
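For illustration, here is a minimal sketch of that offline-signing workflow using BIND's dnssec-signzone; the key filenames, zone file path, and the host alias "master" are placeholders, not taken from the post:

    # On the offline machine: sign the zone with keys that never
    # leave this box (key names are illustrative).
    dnssec-signzone -o example.com -k Kexample.com.+013+11111.key \
        example.com.zone Kexample.com.+013+22222.key

    # Push the signed zone to the hidden master and reload it there.
    scp example.com.zone.signed master:/etc/bind/zones/
    ssh master 'rndc reload example.com'

The provider's servers then transfer the already-signed zone from the hidden master via AXFR, so they never hold the private keys.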

MX Record Answer Contains Same Domain

itemprop="text"> What does it mean for an MX record to have an answer section that contains itself? My earlier belief was that this implies that a domain is it's own mail domain, but from running a couple experiments on web domains, I get connection timeouts when doing SMTP scans on domains that have MX records as below. Which RFC / where in an RFC contains this specification? $ dig -t mx yahoo.net ; <<>> DiG 9.11.3-1ubuntu1.7-Ubuntu <<>> -t mx yahoo.net ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29654 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 65494 ;; QUESTION SECTION: ;yahoo.net. IN MX ;; ANSWER SECTION: yahoo.net. 1800 IN MX 0

amazon web services - Using Route53 to point apex/root domain to Heroku application

This is something that I've seen discussed in some other places, but this issue in particular hasn't been spelled out exactly as not being possible. I want to point an apex domain to a Heroku app ( example.com to example.herokuapp.com ). CNAME doesn't seem to be possible, because CNAMEs are not allowed at the apex (subdomains are fine). ALIAS records seem to be an option, even though I don't fully understand them or whether they are standard. We use AWS Route53 as our DNS provider, but ALIAS records only seem to be usable with specific Amazon services (S3 website, load balancer, ...). So is it possible to point an apex domain to a Heroku app? Is my only other option to use another DNS provider? Thanks EDIT: I'm aware that I can CNAME www.example.com to example.herokuapp.com , and then redirect from example.com to www.example.com using an ALIAS record and an S3 site that redirects. But what we want is the exact opposite: we want the browser to show example
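Since Route53 ALIAS records can only target AWS resources, the usual workarounds are the www CNAME the poster mentions, a DNS provider with ANAME/CNAME-flattening support, or putting an aliasable AWS service such as CloudFront in front of Heroku. As a sketch, the www record via the AWS CLI (the hosted zone ID and names are placeholders):

    aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "example.herokuapp.com"}]
          }
        }]
      }'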

raid - New HDD swap in SAS 6/iR shows it is missing?

SETUP: Dell PowerEdge T410 with Windows Server 2003, SAS 6/iR, RAID 1. Both the original drives were Dell-certified Seagate Barracuda enterprise drives, 250 GB. The new HDD is a Seagate Barracuda 500 GB, not Dell-certified. When I swap the failed drive with the new one, the SAS console shows the drive status as "missing" -- and continues to show the primary drive from the original RAID 1 setup as the only one connected. Does the swapped drive need to be Dell-certified to work, or am I doing something wrong?

setting up a proxy to mirror an SSH SOCKS connection

I have two remote machines, remote1 and remote2. remote2 is only running sshd, and I can't run anything else on it. remote1 is a full-fledged server to which I have complete access. From remote1 I can start a SOCKS proxy via ssh -f -N -D *:8080 me@remote2 , which exposes a SOCKS proxy on port 8080 on remote1. I'd like to add authentication so that the proxy isn't sitting open. How can I do this? It seems like I should be able to use delegate , but I can't even get its HTTP proxy functionality working. When I run delegated -r -P8081 SERVER=http PERMIT="*:*:*" REMITTABLE="*" I can't even get it to work on port 8081. Anyway, I was hoping someone could point me in the right direction on authenticating access to the SOCKS proxy. That is, I want to be able to point my browser's proxy at remote1 and browse the internet through the SSH SOCKS proxy/tunnel to remote2. squid doesn't support a SOCKS parent =( Thanks!
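One sketch that sidesteps an open proxy entirely: bind the dynamic forward to loopback on remote1 and let SSH itself be the authentication layer. The hostnames and port are the poster's; the approach (a second tunnel instead of delegate) is an assumption:

    # On remote1: keep the SOCKS port private to the loopback.
    ssh -f -N -D 127.0.0.1:8080 me@remote2

    # On the workstation: reach remote1's loopback over SSH, so only
    # people who can log in to remote1 can use the proxy.
    ssh -f -N -L 8080:127.0.0.1:8080 me@remote1

    # Then point the browser's SOCKS proxy at localhost:8080.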

linux - Why does sendmail change the FROM domain, and how do I configure the proper one?

I installed Jenkins and configured it to send emails from "jenkins@jenkins.example.com", but the address is always changed when mail is sent and I receive it as coming from "jenkins@default.vps.example.com". I've installed sendmail, and Jenkins is configured to use 127.0.0.1 as the mail server. Any idea why sendmail replaces the FROM domain when email is sent? The hostname of the server is properly set (when I run hostname I do get "jenkins.example.com"). The same thing happens if I send an email from the command line: echo "This is the body" | mail -s "Subject" u@d.com Where does it get this default.vps.example.com domain from? Where is this default domain configurable? LE: in my sendmail.mc I have define(`confDOMAIN_NAME', `jenkins.domain.com')dnl and I regenerated the sendmail.cf file with m4 sendmail.mc > sendmail.cf and restarted sendmail. Still doesn't work. LE2: ADDRESS TEST MODE (ruleset 3 NOT automat
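sendmail rewrites sender addresses to what it considers the canonical name unless told to masquerade, which would explain the default.vps.example.com rewrite. A sendmail.mc sketch (the domain is the poster's; whether masquerading is the root cause here is an assumption):

    dnl Rewrite header and envelope sender domains.
    MASQUERADE_AS(`jenkins.example.com')dnl
    FEATURE(`masquerade_envelope')dnl
    FEATURE(`allmasquerade')dnl

    dnl Then rebuild and restart, e.g.:
    dnl   m4 sendmail.mc > sendmail.cf && service sendmail restart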

windows server 2008 - HP DL380 G7 Disk swap to a HP DL380 G6

The HP DL380 G7 has 3 SAS disks in a RAID 5 configuration. I need to repurpose that server, and instead of doing a clean install on the HP DL380 G6, can I just swap the 3 disks from the HP DL380 G7 into the HP DL380 G6? I expect some driver issues, maybe in the OS itself, because the processor is different. They both use the Smart Array P410i; if I power down the machines and swap the disks in the same order, will the RAID 5 configuration remain and will the OS boot? Answer This should be no problem, as the RAID metadata is on the hard disks themselves. You only need to make sure that both controllers use the same firmware (ideally the latest, so you might want to upgrade first). Also see the Smart Array manual by HP, pages 81-82, "Moving drives and arrays": http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01608507/c01608507.pdf
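Before the swap, the firmware check the answer suggests can be done from the OS with HP's CLI, assuming hpacucli is installed (commands are illustrative):

    # Show controller firmware revision and the array's state.
    hpacucli ctrl all show detail
    hpacucli ctrl slot=0 show config detail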

Two-Way SSL Authentication with Apache 2.2 and OpenSSL 1.0.1e-fips

I have a CentOS 6 server running Apache 2.2.15 with OpenSSL 1.0.1e-fips. I am trying to set up two-way SSL authentication for a specific location in my web root. A 3rd party has provided both a public (plain-text) and a private (binary) certificate. I need some guidance on how to include both the public and private certs to get the handshake working, as I am getting the following error: Re-negotiation handshake failed: Not accepted by client!? Here's what I have in my /etc/httpd/conf.d/ssl.conf file pertaining to this section: <Location /api/path/> SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL SSLCACertificateFile /etc/pki/tls/private/public.cer SSLVerifyClient require SSLVerifyDepth 10 SSLOptions +StdEnvVars +ExportCertData +OptRenegotiate Admittedly I am not an SSL expert. I know enough to get certs installed and working. I have turned logging up to 'debug' level. I have tried to follow these guides: http://www.stefanocapitanio.com/configuring-two-way-authen
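For comparison, a more conventional shape for that block: SSLCACertificateFile must point at the CA chain that signed the client certificates (a PEM file, so a binary/DER cert needs converting first), and a cipher list that still admits SSLv2 and eNULL is worth replacing. Paths and values here are illustrative, not from the post:

    # Convert a binary (DER) certificate to PEM if needed:
    #   openssl x509 -inform der -in client-ca.cer -out client-ca.pem

    <Location /api/path/>
        SSLVerifyClient require
        SSLVerifyDepth 2
        SSLCACertificateFile /etc/pki/tls/certs/client-ca.pem
        SSLCipherSuite HIGH:!aNULL:!eNULL:!EXPORT
        SSLOptions +StdEnvVars +ExportCertData +OptRenegotiate
    </Location>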

Active Directory authentication with Apache: why do I need to use a full name (user@domain)?

We use Apache 2.2 for authentication against Active Directory. The configuration is as follows: AuthFormLDAPURL "ldap://*.*.*.*:389/DC=domain,DC=com?userPrincipalName,sAMAccountName?sub?(objectClass=*)" Note: all traces below were captured with Wireshark. I defined the user john, whose CN is equal to the sAMAccountName: I can authenticate using only john (sAMAccountName). Please find below the LDAP bind request: LDAP bind response: Then I defined the user johnd, whose CN is NOT equal to the sAMAccountName: Unfortunately, I cannot authenticate using johnd (sAMAccountName). Please find below the LDAP bind request: LDAP bind response: I can authenticate using the full name johnd@domain.com. Please find below the LDAP bind request: LDAP bind response: Questions: Why can I not authenticate using the sAMAccountName when the CN is NOT equal to the sAMAccountName? Why can I authenticate using sAMAccountName at domain in this case? Should we recommend that our users always authenticate using sAMAccountName a
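With mod_authnz_ldap, only the first attribute in the AuthLDAPURL attribute list is used to look up the user, so with userPrincipalName listed first a bare short name will only match where the values happen to coincide, which may explain why it worked for john but not johnd. A sketch that searches on sAMAccountName instead (the IP, base DN, and filter follow the poster's placeholders):

    AuthFormLDAPURL "ldap://*.*.*.*:389/DC=domain,DC=com?sAMAccountName?sub?(objectClass=user)"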

routing - RFC 1918 address on open internet?

In trying to diagnose a failover problem with my Cisco ASA 5520 firewalls, I ran a traceroute to www.btfl.com and, much to my surprise, some of the hops came back as RFC 1918 addresses. Just to be clear, this host is not behind my firewall and there is no VPN involved. I have to connect across the open internet to get there. How/why is this possible? asa# traceroute www.btfl.com Tracing the route to 157.56.176.94 1 2 3 4 5 nap-edge-04.inet.qwest.net (67.14.29.170) 0 msec 10 msec 10 msec 6 65.122.166.30 0 msec 0 msec 10 msec 7 207.46.34.23 10 msec 0 msec 10 msec 8 * * * 9 207.46.37.235 30 msec 30 msec 50 msec 10 10.22.112.221 30 msec 10.22.112.219 30 msec 10.22.112.223 30 msec 11 10.175.9.193 30 msec 30 msec 10.175.9.67 30 msec 12 100.94.68.79 40 msec 100.94.70.79 30 msec 100.94.71.73 30 msec 13 100.94.80.39 30 msec 100.94.80.205 40 msec 100.94.80.137 40 msec 14 10.215.80.2 30 msec 10.215.68.16 30 msec 10.175.244.2 30 msec 1

windows server 2012 - HP ML350 G5 146gb 15k SAS 3.5" HDD Upgrade Options

Okay, I've spent a lot of time researching this and still can't quite find the exact answer I'm looking for. I have an HP ML350 G5 with (4) 146 GB single-port 15k SAS 3.5" HDDs and an HP E200i controller / original mobo 413984-001 / dual Xeon 5130s @ 2 GHz / 4 GB RAM. I'd like to buy the largest HDDs possible to replace the current drives (no data to preserve; fresh OS install) and fill all 6 empty slots. I "think" I read in another post on here that it's possible to go down to 2.5" SAS drives, and then I could install 8 of them, but I'm not totally sure on that. Any help would definitely be appreciated. Speed of the drives is NOT important as it will just be used for data archiving, but I do still want to use RAID to protect the data. Is RAID 5 the best choice for this configuration? Also, does anyone know how much RAM this can be upgraded to? I checked on eBay and it shows kits available with massive amounts of RAM but I want to be sure this old

Nginx WebSocket reverse proxy keeps returning 200 instead of 101

I'm currently trying to get hack.chat working on my personal server. Long story short, it consists of two servers. The first is a simple httpd server serving JavaScript and CSS. The second, the chat system, is a node.js server which the JavaScript connects to using WebSocket. And here come the problems. I want it all on port 80, with a different domain name on a single IP, using a separate server block in Nginx. I followed the Nginx websocket doc but this is not working. When the websocket tries to connect, it always gets a 200 return code whereas, if I understood correctly, it should get 101 (Switching Protocols). My Nginx version is 1.8.0 and my server is running Gentoo with Linux 4.0.5. Here is a dump of the relevant nginx conf files: nginx.conf: user nginx nginx; worker_processes 1; error_log /var/log/nginx/error_log info; events { worker_connections 1024; use epoll; } http { include /etc/nginx/mime.types; default_type application/octet-stream; lo
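For reference, the proxy block shape from the Nginx doc the poster links: WebSocket needs HTTP/1.1 and the hop-by-hop Upgrade/Connection headers passed explicitly. The server name and backend port below are illustrative:

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;
        server_name chat.example.com;

        location / {
            proxy_pass http://127.0.0.1:6060;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
        }
    }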

permissions - Issues with MongoDB install on Ubuntu 8.04 LTS

I am installing MongoDB (1.4.1) on Ubuntu (8.04 LTS) and I continuously have a problem where I can be in /usr/local/mongodb/bin and run ./mongo or ./mongod and I am returned "No such file or directory." Let me be very clear here... the files ARE there! The obvious go-to solution is that it is because of permission issues but the permissions are fine. I've even tried others out, still without any luck. I'm really at the end here and any help would be MUCH appreciated. Thank you!
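"No such file or directory" for a file that is plainly present is the classic symptom of a 64-bit binary on a 32-bit system: the "missing" file is the ELF interpreter, not the binary itself. A quick diagnostic (paths per the post):

    # Compare the binary's architecture with the kernel's.
    file /usr/local/mongodb/bin/mongod
    uname -m

If file reports x86-64 while uname -m reports i686, the fix is to download the 32-bit MongoDB build instead.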

tomcat - Tomcat7 PPA for Ubuntu LTS 10.04.2

Does anyone know of a PPA / repo that will install Tomcat 7? I know I can install from source, but I'd rather use a PPA/repo if possible. I found a nice one for Sun *cough* Oracle Java, but there does not seem to be anything for Tomcat 7 as of yet. I've installed Tomcat 6 without issue and like the layout. We are in the process of certifying our application for Tomcat 7 and I would like to have a solid, production-ready setup. Here is the cheat for installing Sun *cough* Oracle Java on a clean Ubuntu LTS server: sudo apt-get install python-software-properties sudo add-apt-repository ppa:sun-java-community-team/sun-java6 sudo apt-get update sudo apt-get install sun-java6-jdk ubuntu@ubuntu:/etc/apt$ java -version java version "1.6.0_24" Java(TM) SE Runtime Environment (build 1.6.0_24-b07) Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
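Absent a PPA, the upstream tarball runs fine alongside the packaged JDK; a sketch (the version placeholder "7.0.x" and the install path are illustrative; check tomcat.apache.org for the current archive URL):

    wget http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.x/bin/apache-tomcat-7.0.x.tar.gz
    sudo tar xzf apache-tomcat-7.0.x.tar.gz -C /opt
    sudo /opt/apache-tomcat-7.0.x/bin/startup.sh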

replication - When and how should we shard MongoDB when we are bound to physical machines?

We maintain a search service that serves data from MongoDB. Our Mongo production instance is arranged in a 4-node replica set across four physical servers. The database comprises several small collections and one large collection. The large collection has the following characteristics: number of documents: 35 million; average document size: ~4.2 kB; collection size: 151 GB; storageSize: 157 GB. Over the next year we anticipate that the number of documents in this collection will double to ~70 million, with a corresponding doubling in the size of the collection. I am conscious that in the "Sharding Existing Collection Data Size" section of the Mongo Reference Limits document, it's specified that "For existing collections that hold documents, MongoDB supports enabling sharding on any collections that contains less than 256 gigabytes of data. MongoDB may be able to shard collections with as many as 400 gigabytes depending on the distribution of document sizes". Consequently, we
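For reference, enabling sharding is done from a mongos once a shard cluster exists; the database/collection names and the shard key below are placeholders, and choosing the key well is the hard part:

    // In the mongo shell, connected to a mongos.
    sh.enableSharding("search")
    sh.shardCollection("search.bigcollection", { customerId: 1, _id: 1 })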

domain name system - Changing DNS records when moving a site

I'm building a new Magento website on an Amazon EC2 instance and will need to point the domain of their old osCommerce site to the new EC2 instance's Elastic IP address. Normally I would have thought this a simple task of updating the A record of their domain, but when I logged into the account with their registrar I see they have 90 records set up already, mostly CNAME & A records. They have no IT guy to ask; I'm almost 100% sure what I need to do, but as I normally work with web dev stuff like PHP and JavaScript I just want to make sure I have it right. To give you a sample of the DNS records they have set up: Type Host Data TTL Kind State In Synch A intweb1.their-domain.com 19?.??.???.OLD 3600 Manual Active yes CNAME intweb.their-domain.com intweb1.their-domain.com 3600 Manual Active yes CNAME www.their-domain.com intweb1.their
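The low-risk sequence is: lower the TTL on the relevant A records well before the cutover, change only those records to the Elastic IP, leave the other records alone, and verify from an outside resolver. A sketch with the poster's placeholder domain:

    # Before the move, confirm the current record and TTL:
    dig +noall +answer their-domain.com A

    # After updating the A record at the registrar, check propagation:
    dig @8.8.8.8 +short their-domain.com A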

amazon ec2 - Ubuntu 9.10 cron script 'ec2-consistent-snapshot' unable to execute and access files

I have an Ubuntu 9.10 image running on Amazon EC2 and I've set up a backup script, ec2-consistent-snapshot. I'm able to run the script from SSH, and everything works peachy. sudo ec2-consistent-snapshot --mysql --xfs-filesystem /vol vol-xxxxxxx >>/mnt/backup.log 2>&1 However, when I schedule a cron job in sudo crontab -e, the script runs but gives me errors. 12 18 4 2 * ec2-consistent-snapshot --mysql --xfs-filesystem /vol vol-xxxxxxx >>/mnt/backup.log 2>&1 ec2-consistent-snapshot: ERROR: Can't find AWS access key or secret access key at /usr/bin/ec2-consistent-snapshot line 76. xfs_freeze: cannot unfreeze filesystem mounted at /vol: Invalid argument ec2-consistent-snapshot: ERROR: xfs_freeze -u /vol: failed(256) The AWS access keys are located under $HOME/.awssecret and work fine if you don't run it from cron. Can someone point me at what I need to do? I've been trying to figure this out for the past week. Also, how do I troubleshoot xfs_freeze t
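cron jobs run with an almost empty environment, so $HOME/.awssecret is never found; setting HOME in the crontab (or passing the credentials explicitly, if the script supports it) is the usual fix. A sketch assuming root's crontab:

    # In sudo crontab -e: give the job the HOME it expects.
    HOME=/root
    12 18 4 2 * ec2-consistent-snapshot --mysql --xfs-filesystem /vol vol-xxxxxxx >>/mnt/backup.log 2>&1

The xfs_freeze "cannot unfreeze" error is likely a knock-on effect: the script bails out on the missing keys before it ever freezes /vol, so the unfreeze finds nothing frozen.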