
Posts

Showing posts from July, 2014

scripting - Bash if statement to compare output from the last command

I am trying to compare output from the last command with a bash if statement:

#!/bin/bash
monit status
if [ "status" != "error" ]; then
    echo -e "hostname\ttest\t0\t0" | /usr/sbin/send_nsca -H hostname -c /etc/send_nsca.cfg
    exit 1;
fi

Even when monit status reports every service as online, the echo command still runs. I cannot figure out how to make the if statement match the output of monit status. Answer You are comparing the static strings status and error, so the result never depends on the command's output at all. There are several ways to go about this. To capture the output of the command in a variable, use

STATUS=`monit status`

or

STATUS=$(monit status)

For a simple case like yours, I would go for

if monit status | grep -q error; then
    ...
fi
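The corrected approach from the answer can be sketched like this; a stub function stands in for monit so the logic is self-contained, and the hostname/paths in the real send_nsca call are placeholders from the question:

```shell
#!/bin/bash
# Stub standing in for `monit status`; on a real host, call monit directly.
monit_status() { printf 'Process mysql: online\nProcess nginx: online\n'; }

# Branch on the command's actual output, not on static strings.
if monit_status | grep -q error; then
    result="alert"    # here the real script would pipe its message to send_nsca
else
    result="ok"
fi
echo "$result"
```

With every service reporting online, the grep finds no match and the alert branch is skipped, which is the behaviour the question was after.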

domain name system - Will all attempts to resolve the IP address succeed if the master DNS server is down and the secondary DNS server works?

Domain exemple-domain.com has two DNS servers: dns1.exemple.com (master) and dns2.exemple.com (slave). Questions: if dns1.exemple.com is temporarily disabled and dns2.exemple.com works, is it possible that some attempts to resolve the IP address fail? I tried it many times in succession with

dig @8.8.8.8 exemple-domain.com | awk '/^;; ANSWER SECTION:$/ { getline ; print $5 }'

and some of the queries did not receive a reply. Could part of that traffic be lost while dns1.exemple.com is not working and dns2.exemple.com is?

filesystems - delete millions of files within a directory

The other day I ran bleachbit on my system. I had enabled its "wipe disk space" option. It took several hours and my disk space filled up completely (100GB or so). After waiting forever, I decided to terminate the program and delete the files manually. Now the problem: I'm not able to delete the files or the directory, and I cannot even do an ls within the directory. I tried rsync -a --delete, wipe, rm, different combinations of find & rm, etc. I followed the instructions in rm on a directory with millions of files and noticed the "Directory Index Full!" error in my logs as well. I noticed that the stat command returned an unusually large directory size of more than a GB; usually it's just 4096 or somewhere in the tens of thousands.

nameh@labs ~ % stat kGcdTIJ1H1
  File: ‘kGcdTIJ1H1’
  Size: 1065287680   Blocks: 2080744   IO Block: 4096   directory
Device: 24h/36d   Inode: 9969665   Links: 2
Access: (077
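For the actual deletion, the usual advice for huge directories is to avoid shell globbing (which builds an enormous argument list) and let find unlink entries in a single pass. A minimal sketch on a small throwaway directory; the real directory name from the question (kGcdTIJ1H1) would go in its place:

```shell
# Create a small throwaway directory to demonstrate on.
d=$(mktemp -d)
touch "$d"/file{1..100}

# Delete the contents without ever expanding a glob in the shell;
# find unlinks each entry as it walks the directory.
find "$d" -mindepth 1 -delete

remaining=$(find "$d" -mindepth 1 | wc -l)
echo "$remaining"    # 0
rmdir "$d"
```

On a directory with millions of entries this still takes a long time, but it does not blow up the way `rm *` does; once empty, the oversized directory inode itself can be removed and recreated to reclaim its index.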

email - How to setup a reliable SMTP server on Windows Server 2008 R2

I know there are SMTP services out there which you can pay to send e-mails with, but surely it's not that difficult to set up one of your own. How can I set up an SMTP server on Windows Server 2008 R2 that is: - Secure; only authorized users/hostnames/etc. can send mail - Reliable; e-mails don't get lost - Not treated as spam; when e-mails are received by, say, Gmail/Outlook/Hotmail they don't go straight to junk I understand this depends both on the server + e-mail headers AND on e-mail content - I'm looking to safeguard the server part. Thanks!

hard drive - Configuring RAID in HP Proliant ML350e Gen8

We have purchased two stock HP ML350e Gen8 servers, on which Ubuntu 12.04 is to be installed. The stock base unit has provision for 6 hard disks, of which one bay is filled with a 500GB hard disk and the others with blank plates. We are planning to buy hard disks and drive bays for creating a RAID array. I am confused by software RAID, hardware RAID and RAID levels. Kindly help me: which RAID level is commonly used in IT firms and is most reliable? Kindly guide me through the steps for creating RAID on an HP ProLiant ML350e Gen8 server. I am new to RAID configuration, so kindly answer in detail. Answer Ubuntu is not a good match for the HP ProLiant ML350e server hardware. Ubuntu is not supported for use with the hardware RAID controller on that system. Please see the following question for more detail: Installing Ubuntu 12.04 on HP Proliant DL380e with 1TB SAS Drive Ubuntu is certified for the ProLiant ML350e only if the Dynamic Smart Array co

nginx load balancer rewrite to listen port

I have implemented a load balancer using nginx as below:

upstream lb_units {
    server 127.0.0.1:88 weight=7 max_fails=3 fail_timeout=30s;      # Reverse proxy to BES1
    server 10.200.200.107 weight=1 max_fails=3 fail_timeout=30s;    # Reverse proxy to BES2
    server 10.200.200.94 weight=1 max_fails=3 fail_timeout=30s;     # Reverse proxy to BES2
}

server {
    listen 80;
    server_name mysite.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://lb_units;
        proxy_redirect off;
    }
}

#########FINALLY THE REAL SITE##############
server {
    listen 88 backlog=128 default_server;
    rewrite ^/en/businesses$ /en/business permanent;
    location / {
        proxy_redirect off;
    }

Now whenever I try to browse the /en/businesses page, it redirects me to port 88 (i.e. http://mysite.com:88/en/business ). How can I force nginx to keep on port 80 when it run
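The :88 appears because the permanent rewrite is generated by the backend server block, which advertises its own listen port. Two commonly suggested fixes, sketched here and untested against this exact configuration:

```nginx
# Option A (assumes the rewrite only needs to happen once, at the edge):
# move it into the public-facing port-80 server block.
server {
    listen 80;
    server_name mysite.com;
    rewrite ^/en/businesses$ /en/business permanent;
    # ... existing location / { proxy_pass http://lb_units; } ...
}

# Option B: keep the rewrite on the backend, but stop nginx from
# putting ":88" into the Location headers it generates.
server {
    listen 88 backlog=128 default_server;
    port_in_redirect off;
    rewrite ^/en/businesses$ /en/business permanent;
    # ...
}
```

Option A is generally the cleaner choice, since redirects are a client-facing concern and belong on the server the client actually talks to.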

storage - How exactly does a SAS SFF-8087 breakout cable work? + RAID/connection questions

Please let me know if my question does not make any sense, as I am not sure I am interpreting it correctly due to my lack of technical knowledge on this. Suppose I am using a motherboard which has a connection for an SFF-8087 to 4x cable, such as this SFF-8087 to 4x SATA connection. I am still learning about SAS, but was told by a potential employer to build a system utilizing these connections. However, I am just not sure I understand how the system will treat the SATA connections which are going into the SAS port via this cable. Also, what would be the advantage of doing it this way as opposed to just connecting the SATA drives directly into the SATA motherboard ports? I believe the built-in SAS connection may be an integrated RAID controller. Although, yes, I can just go ahead and connect all the cables that fit, I would like to have a better grasp of what I am doing, such as: if a motherboard has SAS connections, should I automatically assume it has some t

database - Third-party SSD solutions in ProLiant Gen8 servers

I was wondering if anyone had any specific experience using Intel DC3700 SSDs (or similar) in the HP (DL380p) Gen8 servers? I'm upgrading a set of database servers that use direct-attached storage. Typically, we use HP-branded everything in our server configs, and beyond a few SSD'd desktops (all of which have worked flawlessly), I have not otherwise used SSDs - certainly not in a server. The servers we're upgrading run SQL Server (2005) on Windows. We're moving to SQL 2012. Current boxes host a single 200GB database on DL370 G6 provisioned with 72GB 15K SFF drives in RAID 1+0 as follows: OS (2 spindles), tempdb (4 spindles), t-logs (8 spindles), data (20 spindles). Performance is not an issue (CPU load is typically 20% / peak 30%, disk queues are typ = 1). The data volume disks are running in MSA50s off a P800 - so there's probably 5K IOPS there tops. The hardware is approaching 4 years old, and so it's time for a refresh. Data usage, as reported by the individ

ubuntu - Run a cronjob every day except the first day of the month

I'm trying to run job A on the first day of the month:

0 0 1 * *

and run job B on the other days of the month:

0 0 2-31 * ?

Vixie cron on Ubuntu 14.02 LTS refuses the second syntax, though it seems valid according to Wikipedia and the official specs: "crontab", The Open Group Base Specifications Issue 7 - IEEE Std 1003.1, 2013 Edition, The Open Group, 2013, retrieved May 18, 2015. According to the references above, the syntax 0 0 2-31 * * would run the job every day of the month, as the third and fifth fields are treated as OR clauses of the run condition. Answer You should be using a *, not a ? (which is invalid). The Wikipedia page notes that ? is a nonstandard extension used only by nnCron, which you aren't using. In any case, if the day of week is set to * and the day of month is specified, then the day of week is ignored. The IEEE 1003.1 spec you reference actually states this, explaining how these fields interact: If either the month or day of month is
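Putting the answer into practice, the crontab would look like this (the job commands are placeholders, not from the question):

```
# m  h  dom   mon dow  command
0    0  1     *   *    /usr/local/bin/jobA    # first day of the month only
0    0  2-31  *   *    /usr/local/bin/jobB    # every other day of the month
```

With dow left as *, the day-of-month range alone decides when each job runs, exactly as the spec describes.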

centos - /dev/mapper/VolGroup-lv_root has no more space left?

We have a CentOS machine on which MySQL is not starting, and it is due to the disk being full. Below are the df -h results. What best can be done in this situation? Add another hard disk?

Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root    47G   45G     0 100% /
tmpfs                          3.9G     0  3.9G   0% /dev/shm
/dev/xvda1                     485M   33M  428M   7% /boot
/dev/mapper/VolGroup-lv_home    44G  180M   42G   1% /home
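Before adding disks, it is worth finding what is actually eating the space with du. A self-contained sketch of the technique on a throwaway directory (on the real box you would run something like `du -xh / | sort -rh | head` as root):

```shell
# Build a tiny tree with one obviously large subtree.
d=$(mktemp -d)
mkdir "$d/small" "$d/big"
dd if=/dev/zero of="$d/big/blob" bs=1M count=5 2>/dev/null
printf 'x' > "$d/small/tiny"

# Largest subtrees first; -x keeps du on a single filesystem.
biggest=$(du -sx "$d"/* | sort -rn | head -n 1 | awk '{print $2}')
echo "$biggest"    # prints the big/ path
```

Note also that lv_home is 44G and nearly empty; depending on the filesystems involved, shrinking lv_home and growing lv_root with LVM, or relocating MySQL's datadir onto /home, may solve this without new hardware. Both are sketches to investigate, not step-by-step instructions.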

linux - oom-killer killing processes despite having plenty of free swap

This machine has a ton of swap, yet processes still occasionally get killed by the oom-killer. Can anyone explain this behavior, and more importantly, how to keep it from occurring? dmesg output:

python invoked oom-killer: gfp_mask=0x1200d2, order=0, oomkilladj=4
Pid: 13996, comm: python Not tainted 2.6.27-gentoo-r8cluster-e1000 #9
Call Trace:
 [ ] oom_kill_process+0x57/0x1dc
 [ ] getnstimeofday+0x53/0xb3
 [ ] badness+0x16a/0x1a9
 [ ] out_of_memory+0x1f2/0x25c
 [ ] __alloc_pages_internal+0x30f/0x3b2
 [ ] read_swap_cache_async+0x48/0xc0
 [ ] swapin_readahead+0x57/0x98
 [ ] handle_mm_fault+0x408/0x706
 [ ] do_page_fault+0x42c/0x7e7
 [ ] error_exit+0x0/0x51
Mem-Info:
Node 0 DMA per-cpu:
CPU 0: hi: 0, btch: 1 usd: 0
CPU 1: hi: 0, btch: 1 usd: 0
CPU 2: hi: 0, btch: 1 usd: 0
CPU 3: hi: 0, btch: 1 usd: 0
Node 0 DMA32 per-cpu:
CPU 0: hi: 186, btch: 31 usd: 103
CPU 1: hi: 186, btch: 31 usd: 48
CPU 2: hi: 186, btch: 31 usd: 136
CPU 3: hi:

mod rewrite - Redirect, Change URLs or Redirect HTTP to HTTPS in Apache - Everything You Ever Wanted to Know About Mod_Rewrite Rules but Were Afraid to Ask

This is a Canonical Question about Apache's mod_rewrite. Changing a request URL, or redirecting users to a different URL than the one they originally requested, is done using mod_rewrite. This includes such things as: Changing HTTP to HTTPS (or the other way around) Changing a request for a page which no longer exists to a new replacement Modifying a URL format (such as ?id=3433 to /id/3433 ) Presenting a different page based on the browser, the referrer, or anything else possible under the moon and sun Anything you want to mess around with in a URL Everything You Ever Wanted to Know about Mod_Rewrite Rules but Were Afraid to Ask! How can I become an expert at writing mod_rewrite rules? What is the fundamental format and structure of mod_rewrite rules? What form/flavor of regular expressions do I need to have a solid grasp of? What are the most common mistakes/pitfalls when writing rewrite rules? What is a good method for testing and verifying mod_rewrite rules? Are there SEO
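As a taste of the format, the first item in that list (HTTP to HTTPS) is commonly written like this; it is one widely used pattern, not the only correct one:

```apache
RewriteEngine On
# If the request did not arrive over TLS, redirect it permanently
# to the same host and path over https.
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

The general shape, a RewriteCond (or several) guarding a RewriteRule with a pattern, a substitution and flags, is the structure the questions below dig into.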

linux - Apache Server files in /var/www/

All right... I have my server set up and I have 4 sites residing in /var/www/. Each site directory and the files underneath it are all root:www-pub, per this post: What's the best way of handling permissions for Apache 2's user www-data in /var/www? My user, cdog, is part of www-pub, as directed by the above post, and after more research I believe umask is set up properly. Issue 1: Creating new files inside any of the /var/www/ directories gives me permissions cdog:www-pub -rw-r--r--, while all other files are root:www-pub -rw-rw-r--. I was led to believe (according to the above post) that any new files created would be the latter. Issue 2: Most of these directories, with permissions of drwxrwsr-x, are Joomla directories. Logging into the Joomla back end gives me a whole bunch of unwritable directories, which isn't good for updating/installing extensions/plugins, etc. First, why aren't my files being created with the correct permissions? Second, why are the Joomla dire
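The drwxrwsr-x in the question shows the setgid bit is already in place; a quick sketch of what that bit does and does not do (on a throwaway directory standing in for /var/www/<site>):

```shell
# Throwaway directory standing in for a /var/www site directory.
d=$(mktemp -d)

# rwxrwsr-x: the "s" (setgid) makes files created inside inherit the
# directory's group (www-pub in the question) instead of the creator's
# primary group.
chmod 2775 "$d"

perms=$(stat -c %a "$d")
echo "$perms"    # 2775
rmdir "$d"
```

The catch is that setgid only controls the group, not the mode bits: the rw-r--r-- vs rw-rw-r-- difference comes from the umask of the process creating the file. A umask of 0002 must be in effect for cdog's shell, and separately for whatever user Joomla's PHP runs as, which is a common reason one of the two "works" and the other doesn't.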

windows server 2008 - Suddenly Gmail and Hotmail block our emails

Starting from yesterday, all emails sent from our server are being rejected by Gmail and Hotmail. Everything was working properly until now. Here are the error messages received:

Gmail
Failed Recipient: email@gmail.com
Reason: Remote host said: 550 5.7.1 [server IP 1] Our system has detected an unusual rate of 5.7.1 unsolicited mail originating from your IP address. To protect our 5.7.1 users from spam, mail sent from your IP address has been blocked. 5.7.1 Please visit http://www.google.com/mail/help/bulk_mail.html to review 5.7.1 our Bulk Email Senders Guidelines. el7si1434447wib.69 - gsmtp

Hotmail
Failed Recipient: address@hotmail.com
Reason: Remote host said: 421 RP-001 (BAY0-MC3-F30) Unfortunately, some messages from [server IP] weren't sent. Please try again. We have limits for how many messages can be sent per hour and per day. You can also refer to http://mail.live.com/mail/troubleshooting.aspx#errors.

I do know the point of those messages - they both blocked our se

linux - df -h shows 100% usage but sizes don't add up

I have a situation where my root filesystem is supposed to have plenty of free space, but Debian behaves as if it had none left. Non-root users are even refused writes, with a complaint about the lack of free space. For example:

~$ echo "qwertyu" > test
-bash: echo: write error: Spazio esaurito sul device

(Sorry about the language; I didn't install the server myself. The error reads "no space left on device".) But root writes to the same directory without complaints. Also, if I do df -h as root I get this:

/# df -h
File system                                             Dim.  Usati  Dispon.  Uso%  Montato su
rootfs                                                   48G    46G        0  100%  /
udev                                                     10M      0      10M    0%  /dev
tmpfs                                                   397M    88M     310M   23%  /run
/dev/disk/by-uuid/8063903c-80ad-4f72-81b0-cd67dbd48fc7   48G    46G        0  100%  /
tm
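The symptom pattern here (root can write, everyone else gets ENOSPC, and df's numbers don't add up) is the classic signature of ext filesystems' root-reserved blocks: by default 5% of the filesystem is set aside for root only. A quick sanity check of the arithmetic against the df output above:

```shell
# df reports 48G size and 46G used with 0 available to ordinary users.
# 5% of 48G is the reserve that only root may consume:
size_gb=48
reserved_gb=$(( size_gb * 5 / 100 ))
echo "$reserved_gb"    # 2  -- matching the "missing" ~2G gap
```

If this is an ext3/ext4 filesystem, `tune2fs -m 1 /dev/disk/by-uuid/...` would shrink the reserve to 1% (an option to consider, not a blanket recommendation; the reserve exists so root can still operate when the disk fills).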

split dns - How can I access a server on local network using its public name?

I have a problem in which I cannot access a server using its public name from the same network as the server. Access to the server works fine from the internet. Reading up on this problem, I've run across such things as hairpin NAT, loopback NAT, split DNS, editing hosts files, etc. My network has an SMC 8013WG-CCR (Comcast) cable modem connected to a Linksys WRT54G2. As I understand it, the router is supposed to handle this sort of loopback (by leaving the security option "Filter Internet NAT Redirection" disabled), but the connection between these two devices is a secondary LAN, so I don't think the router "knows" what the proper public IP address is. The easiest solution would be to edit the hosts files of all the computers in the network, but many are notebook computers which will need to access the server both on the LAN and externally. The server is Windows Server 2012, so I could set it up as the internal DNS server but I don't have enough experience

domain name system - Why is geo-redundant DNS necessary for small sites?

This is a Canonical Question about DNS geo-redundancy. It's extremely common knowledge that geo-redundant DNS servers located at separate physical locations are highly desirable when providing resilient web services. This is covered in-depth by document BCP 16 , but some of the most frequently mentioned reasons include: Protection against datacenter disasters. Earthquakes happen. Fires happen in racks and take out nearby servers and network equipment. Multiple DNS servers won't do you much good if physical problems at the datacenter knock out both DNS servers at once, even if they're not in the same row. Protection against upstream peer problems. Multiple DNS servers won't prevent problems if a shared upstream network peer takes a dirt nap. Whether the upstream problem completely takes you offline, or simply isolates all of your DNS servers from a fraction of your userbase, the end result is that people can't access your domain even if the services themselves a

linux - How to provide external access to only the Git projects (clone/pull/push) of an internal GitLab deployment

We have set up a GitLab server (GitLab 7.0 Community Edition). It is up and running and our colleagues can use it within the LAN (the IP address and host name are only visible from the LAN). Some of the projects hosted on this GitLab instance should be "shared" with external users (not part of our company). We would like to let them access the Git repositories in order to be able to clone, pull and push. The GitLab server will stay within the LAN, but we can set up a server in our DMZ which could reverse-proxy (or some other alternative) the GitLab server. We would like, however, that only the ".git" URLs are accessible via HTTPS (so as not to give access to the GitLab WUI (Web User Interface)). How can we set up the "reverse proxy" in the DMZ to provide access for external users (on the internet) to our internal Git repositories hosted on GitLab? Wishes: Only https://*/*.git/* URLs should be allowed externally; HTTP basic authentication on the reverse proxy would be
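One way to sketch the DMZ proxy with nginx; the hostnames, internal address and htpasswd path are placeholders, and the location regex assumes GitLab's usual namespace/project.git URL layout:

```nginx
server {
    listen 443 ssl;
    server_name git.example.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    # Allow only Git smart-HTTP URLs (anything under *.git) through
    # to the internal GitLab server, behind basic auth.
    location ~ ^/[^/]+/[^/]+\.git(/.*)?$ {
        auth_basic           "Git access";
        auth_basic_user_file /etc/nginx/git.htpasswd;
        proxy_pass           http://gitlab.lan.example.com;
        proxy_set_header     Host $host;
    }

    # Everything else (the GitLab web UI) stays internal.
    location / {
        return 403;
    }
}
```

Note this keeps the WUI hidden but means external users authenticate twice in a sense: against the proxy's htpasswd and against GitLab's own HTTP credentials for push access.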

How to create a CNAME for a domain's root name

I'd like to set a domain's root name to a CNAME instead of the usual A record. Here's a perfect example of what I'm trying to do: dig lrnskls.com Notice the answer section: ;; ANSWER SECTION: lrnskls.com. 300 IN CNAME partner.adjix.com. partner.adjix.com. 300 IN A 67.121.212.61 The reason I'm trying to do this is so I can point a domain's root name, via a CNAME alias, to Amazon's S3. Using an A record doesn't work because S3's IPs change every few minutes for load balancing purposes. PS - This seems to be legal under section 3.6.2 of RFC 1034 (note the USC-ISIC.ARPA example): http://www.faqs.org/rfcs/rfc1034.html

ip - How are web hosting companies able to give everyone public IPv4 addresses

I have a few web applications on AWS/Linode/DigitalOcean/HostGator VPS instances, and each has an IPv4 address. As I understand it, APNIC and ARIN distribute IP addresses, and new hosting companies have to buy unused IPs from ISPs going out of business. Given how big AWS, DigitalOcean etc. are now, and the ever-growing number of websites and apps: How do these companies continue to fulfil the need for IP space? How are they able to give me public IPv4 addresses for VPS instances? How does an agency decide what size pool of IPs can be given to which company? They are all profit-making companies in the end, there for the "land grab", and the early movers seemingly have a big advantage - how does one level the playing field for new entrants?

302/301 and 404 redirect issue for apache redirection for tomcat

I'm using Apache HTTPD in front of Apache Tomcat with the following virtual host:

$ cat /etc/apache2/sites-enabled/onlinetaskboarddotcom
ServerAdmin comented@out.com
ServerName www.onlinetaskboard.com
ServerAlias onlinetaskboard.com
DocumentRoot /home/ubuntu/www/apache/onlinetaskboarddotcom
ProxyPass / http://www.onlinetaskboard.com:8080/
ProxyPassReverse / http://www.onlinetaskboard.com:8080/
Options FollowSymLinks
AllowOverride None
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
ErrorLog ${APACHE_LOG_DIR}/error.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/access.
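Assuming the :8080 connector is Tomcat running on the same machine (which is not stated in the question, so treat it as an assumption), a commonly recommended cleanup is to proxy to localhost rather than back through the site's own public hostname, so requests are not looped via DNS; a sketch:

```apache
# Proxy to the local Tomcat connector directly (assumes Tomcat
# listens on localhost:8080 on this same host).
ProxyPass        / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
# Pass the original Host header through so Tomcat-generated
# redirects use the public name.
ProxyPreserveHost On
```

ProxyPassReverse then rewrites Location headers from the backend, which is often where stray 302s to the wrong host originate.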

domain name system - Using a CNAME record to redirect a subdomain to another website?

Under my current understanding, I am able to use CNAME records to redirect users to another domain. For example, I own mydomain.com; when a user goes to keep.mydomain.com, they should be redirected to keep.google.com. I currently have the following record in my DNS record set:

Name: keep.mydomain.com
Type: CNAME
Value: https://keep.google.com/
TTL: 300ms

I don't have any other records under that subdomain; however, I do have MX, A, NS, SOA, TXT, and other CNAME records under other domains/subdomains, which work properly. However, when I go to keep.mydomain.com, I get the error:

This site can’t be reached
keep.mydomain.ca’s server DNS address could not be found.
ERR_NAME_NOT_RESOLVED

Am I misunderstanding the use of CNAME, or is there something I have configured that is conflicting? Answer CNAMEs are not redirections; they are aliases. A CNAME also stands in for all other resource record types, such as A, MX and TXT, so if you query for an A record,
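One concrete problem with the record as shown: a CNAME's target must be a bare hostname, with no scheme and no trailing slash, and TTLs are expressed in seconds (300 is five minutes, not milliseconds). In zone-file form the corrected record would look something like:

```
keep.mydomain.com.  300  IN  CNAME  keep.google.com.
```

Even with valid syntax, a CNAME is only a DNS alias, not an HTTP redirect: whether keep.google.com actually serves content when asked for the Host keep.mydomain.com is a separate question, and for third-party services it usually will not.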

domain name system - Regarding gmail SPF record and A record

I have a domain with the following SPF record: "v=spf1 +a +mx +ip4:123.45.67.89 ~all" Two questions: Is the IP necessary there? The A record on the domain resolves to the same IP, i.e. 123.45.67.89. I've created an email address on the domain and added it to Gmail to send and receive emails. The emails are working fine; I am able to send emails and they don't carry the warning "Google cannot verify if the domain actually sent the email or not". Do I need to add any Gmail SPF record to it? I'm asking about the v=spf1 include:_spf.google.com record. Answer If you have exactly the same IP in your a mechanism (or mx mechanism), the ip4 mechanism is unnecessary and CAN (rather than MUST) be removed. As no domain is specified in your +a & +mx, the current domain is used, while ip4 & ip6 must always have an address (or address/prefix) specified. With the current SPF record, Google falls within ~all, causing SoftFail, i.e. "The SPF record
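If mail really does leave both from the server itself and through Gmail's outbound servers, the record the answer is heading toward would look something like this (a sketch; keep only the mechanisms that match how mail actually leaves the domain):

```
v=spf1 a mx include:_spf.google.com ~all
```

The include pulls in Google's published sender ranges, so messages submitted via Gmail pass with an explicit match instead of falling through to the ~all SoftFail.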

linux - memory leak? RHEL 5.5. RSS show ok, almost no free memory left, swap used heavily

I have encountered a very interesting problem, and it seems that some physical memory has quietly disappeared. I am very puzzled, so if anyone could give some help I would be very appreciative. Here is my top output, sorted by memory usage:

Cpu(s): 0.8%us, 1.0%sy, 0.0%ni, 81.1%id, 14.2%wa, 0.0%hi, 2.9%si, 0.0%st
Mem:  4041160k total, 3947524k used,   93636k free,     736k buffers
Swap: 4096536k total, 2064148k used, 2032388k free,   41348k cached

  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND
15168 root  20   0 3127m 290m 1908 S 108.2  7.4 43376:10 STServer-1
18303 root  20   0 99.7m  12m  912 S   0.0  0.3  0:00.86 sshd
 7129 root  20   0 17160 7800  520 S   0.5  0.2  5:37.52 thttpd
 2583 root  10 -10  4536 2488 1672 S   0.0  0.1  1:19.33 iscsid
 4360 root  20   0 15660 2308  464 S   0.0  0.1 15:42.71 lbtcpd.out
 4361 root  20   0  186m 1976  964 S   0.5  0.0 82:00.36 lbsvr.out
 3932 root  20   0  100m 1948  836 S   0.0  0.0 30:31.38 snmpd
186

windows server 2003 - Active Directory: Accessing network share as WinXP SYSTEM user

The problem: I cannot access a network share, accessible to Domain Computers, from a process running on Windows XP under the SYSTEM account. I've got a 2008R2 domain, a 2003R2 file server and a set of clients. On the file server I've made a share accessible to the Domain Computers group. The share is supposed to hold a read-only repository of files ( Wpkg ) to be accessed from domain workstations by a system service ( Wpkg-GP ) run as the SYSTEM user. Now the problem is that, while it works perfectly on Windows 7, it completely fails on Windows XP. It seems that, for some reason, the SYSTEM account on Windows XP cannot authenticate as the computer. See more in the details. The question: What am I doing wrong? Is it perhaps natural behaviour of Windows XP? If so, can it be changed or fixed? Or is there any other way to achieve a scenario in which a network share would be visible to the SYSTEM user on computers, without being accessible by the users? The details: The Domain Computers gro

ubuntu - LXC networking with public IP

I have installed LXC on Ubuntu Server 12.04 using this link. It was installed successfully and I am able to log in using ubuntu/ubuntu as username and password. Then I tried to set up the network for the LXC container. I changed /etc/network/interfaces to:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 125.67.43.100
    netmask 255.255.255.0
    broadcast 178.33.40.255
    gateway 125.67.43.1

The content of /var/lib/lxc/mycontainer/config is:

lxc.utsname = mycontainer
lxc.mount = /var/lib/lxc/test/fstab
lxc.rootfs = /var/lib/lxc/test/rootfs
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.veth.pair = vethmycontainer
lxc.network.ipv4 = 125.67.43.102
lxc.network.hwaddr = 02:00:00:86:5b:11
lxc.devttydir = lxc
lxc.tty = 4
lxc.pts = 1024
lxc.arch = amd64
lxc.cap.drop = sys_module mac_admin mac_override
lxc.pivotdir = lxc_putold

The container's etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth0
iface
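The container config attaches its veth to br0, but the host interfaces file shown defines no br0, only a plain eth0. The common pattern is to make the host's public address live on a bridge that enslaves eth0 (a sketch using the addresses from the question; bridge-utils must be installed, and eth0 itself then carries no address):

```
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
    bridge_ports eth0
    bridge_fd 0
    address 125.67.43.100
    netmask 255.255.255.0
    gateway 125.67.43.1
```

With the bridge up, the container's veth pair plugs into br0 and its 125.67.43.102 address sits on the same segment as the host, assuming the hosting provider routes additional IPs to the same MAC-learning segment, which is worth verifying first.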

PHP as CGI or Apache Module?

I've always believed that PHP works better installed as an Apache module, but recently, someone on a local forum pointed out that running PHP as CGI is better security-wise. I've done some googling and it appears that Dreamhost defaults its PHP installation to working via CGI . Now I'm a little puzzled. As far as I understand (I'm no sysadmin, just a web developer), there's the problem of user permissions when PHP is installed as an Apache module. And there's the problem of speed when using PHP via CGI (or it was). What's the recommended way nowadays for installing PHP? On both shared and dedicated hosting. Answer Running PHP as a module is usually more efficient, but means all scripts run under the same user account (what-ever account Apache runs as) which can pose security concerns in a shared environment. CGI is much slower as it starts a new PHP processes for every request that needs one, but can be configured to run each script as the user

How to calculate the MaxClient value in apache?

I want to set an optimum value for MaxClients in Apache for my production server. What parameters should I consider while calculating this value? Answer Refer to the Apache Performance Tuning guide. Quote: "You can, and should, control the MaxClients setting so that your server does not spawn so many children it starts swapping. The procedure for doing this is simple: determine the size of your average Apache process, by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes."
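A worked example of that rule of thumb, with made-up numbers (8 GB of RAM, ~2 GB held back for the database, caches and the OS, ~25 MB resident per Apache child; substitute figures from top on your own server):

```shell
total_mb=$(( 8 * 1024 ))      # total RAM
reserved_mb=$(( 2 * 1024 ))   # kept for MySQL, caches, the OS itself
per_child_mb=25               # average resident size of one httpd child

maxclients=$(( (total_mb - reserved_mb) / per_child_mb ))
echo "$maxclients"    # 245
```

Rounding down (e.g. to 240 here) leaves headroom; the goal is simply that worst-case Apache memory never pushes the box into swap.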

PHP Server Doesn't Receive Mails from Windows SMTP

I've got a strange problem. I set up my SMTP server on Windows Server for PHP on IIS, and I've been sending mails from there, but while my Gmail account receives the mail, my PHP hosting server doesn't receive these mails. I thought they might be blocked or marked as spam on my PHP server, but it seems clear. Any ideas, anyone? SOLVED IT by adding my domain's name on the sent_from line in the php.ini file. Here is my PHP code:

if (isset($_POST['submit'])) {
    $msg .= "Time: " . date("m/d/y g:ia", time()) . "\n";
    $msg .= "Company Name: " . $compname . "\n";
    $msg .= "Booking Name: " . $opp['opname'] . "\n";
    $msg .= "Record Manager: " . $opp['recmanager'] . "\n";
    $msg .= "Create Date: " . substr($opp['createdate'],0,-7) . "\n";
    foreach ($_POST as $field=>$value) {
        if ($field != "submit")
            $msg .= $field . ": " . $value . "\n"

linux - amazon ec2 instance gets timed out

I built an Amazon EC2 instance following the steps given at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html and also followed this: http://imperialwicket.com/aws-building-a-lamp-instance

service httpd restart says:

[root@ip-21-31-3-19 ec2-user]# service httpd restart
Stopping httpd:  [ OK ]
Starting httpd:  [ OK ]

There is an index.php file at /var/www/html/index.php, but the browser gives the error: The connection has timed out. What could be the issue? UPDATE My instance was created by my friend from the US; I am in India. He sent me the mail he received after creating the instance:

Availability zone - us-west-2c
Security groups - launch-wizard-1. view rules
Scheduled events - No scheduled events
AMI ID - amzn-ami-pv-2013.09.2.x86_64-ebs (ami-cc293fc)
Subnet ID - subnet-4f727427
Platform -
Network interfaces - eth0
Key pair name - AmazonLinux01

I think the security group is already created. I am hand

cache - How effective is LSI CacheCade SSD storage tiering?

LSI offers their CacheCade storage tiering technology, which allows SSD devices to be used as read and write caches to augment traditional RAID arrays. Other vendors have adopted similar technologies; HP SmartArray controllers have their SmartCache . Adaptec has MaxCache ... Not to mention a number of software-based acceleration tools ( sTec EnhanceIO , Velobit , FusionIO ioTurbine , Intel CAS , Facebook flashcache ?) . Coming from a ZFS background, I make use of different types of SSDs to handle read caching (L2ARC) and write caching (ZIL) duties. Different traits are needed for their respective workloads; Low-latency and endurance for write caching. High capacity for read. Since CacheCade SSDs can be used for write and read cache, what purpose does the RAID controller's onboard NVRAM play? When used as a write cache, what danger is there to the CacheCade SSDs in terms of write endurance? Using consumer SSDs seems to be encouraged. Do writes go straight to SSD or do they hit

apache 2.2 - Can't ping self

I have a wireless internet connection set up on my Mac (v10.5.6). I am connected to the internet and everything is running smoothly. I recently discovered some quirky behaviour while setting up the Apache web server. When I typed my dynamic IP ( http://117.254.149.11/ ) into the web browser to visit my site pages, it just timed out. In Terminal I tried pinging localhost and it worked:

$ ping localhost
PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.063 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.044 ms

But if I pinged my IP it would just time out:

$ ping 117.254.149.11
PING 117.254.149.11 (117.254.149.11): 56 data bytes
^C
--- 117.254.149.11 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss

Pinging any other site works, though. I am completely stumped. Any help would be greatly appreciated. Answer Make sure th

Hardening CentOS 6 web server security

There is a web server installed as a virtual machine in the cloud. I've configured it myself, and it has two corporate web sites on it. I am not planning for anything else to be there. So before it goes public I want to check it for security vulnerabilities. Please guide me to best practices, checklists and audit procedures - anything to measure and evaluate the security of the server. Answer From a network security point of view you can do the following: Make sure that the server is fully patched. I cannot stress that enough. Make sure that you do not expose any unnecessary services to the outside world; hopefully only port 80 will be exposed. Run a vulnerability scan, such as Nessus or OpenVAS, against the server. There are a lot of security tools that can help you with vulnerability scanning. Review each finding and apply countermeasures if necessary. Disable what you do not actually need, reducing your attack surface. Enable SELinux if not already enabled. The a

rewrite - RewriteCond always match incorrectly

If I write:

<?php echo "SCRIPT_NAME: " . @$_SERVER['SCRIPT_NAME'] . " "; ?>

the output is:

SCRIPT_NAME: /index.php

I'm using these rewrite lines:

RewriteCond %{SCRIPT_NAME} !^/index\.php$
RewriteRule .* http://example.com/404 [L]

I have checked: http://example.com/foo http://example.com/bar http://example.com/hdhd The RewriteCond is matched and then I am redirected to 404:

[03/Aug/2013:13:07:48 +0200] [example.com/sid#23c3710][rid#263afe8/initial] (4) [perdir /var/www/vhosts/example.com/httpdocs/] RewriteCond: input='' pattern='!^/index\\.php$' => matched
[03/Aug/2013:13:07:48 +0200] [example.com/sid#23c3710][rid#263afe8/initial] (2) [perdir /var/www/vhosts/example.com/httpdocs/] rewrite '404' -> 'http://example.com/404'
[03/Aug/2013:13:07:48 +0200] [example.com/sid#23c3710][rid#263afe8/initial] (2) [perdir /var/www/vhosts/example.com/httpdocs/] implicitly forcing redirect (rc=302) with http://example.com/404
[0
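The rewrite log itself points at the cause: the condition shows input='', meaning %{SCRIPT_NAME} is empty at the time per-directory (perdir, i.e. .htaccess) rewriting runs, so the negated pattern matches every request. One commonly used alternative (a sketch, untested against this exact setup) is to test the request URI, which is populated at that stage:

```apache
RewriteCond %{REQUEST_URI} !^/index\.php$
RewriteRule .* http://example.com/404 [L]
```

PHP's $_SERVER['SCRIPT_NAME'] being /index.php is not a contradiction: that value is filled in later, after the rewrite pass has already routed the request.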

virtualization - virtual san appliance in esxi? why?

I am looking at different SAN solutions (Starwind, Lefthand, OpenFiler, OpenSolaris, etc.) for my small office and I understand that there are many virtual SAN appliance (VSA) solutions which can be used with ESXi. Does anyone use it in a mission critical environment? Let's see if I got it right: In order for the SAN VM to "see" the harddisks, ESXi would have to "see" it first right? I'm assuming you would have to install the SAN VM on a local datastore, AND allocate space on a local datastore for the SAN VM, something like this: USB boot ESXi RAID card with local disk array A (For OS) RAID card with local disk array B (For SAN) Install SAN VM on local disk array A Use local disk array B as SAN datastore Share SAN datasore with other ESXi machines Are there advantages to this kind of setup? It seems a little overcomplicated to set it up this way, isn't it? How does this support HA? Answer Does anyone use it in a mission critical environme

linux - MySQL Tables crashing randomly

This is one of many random tables that get corrupted. Any ideas why, and what could be causing this? How do I keep MySQL tables, and MySQL itself, from crashing?

Repairing USR_wp537
USR_wp537.rev_commentmeta            OK
USR_wp537.rev_comments               OK
USR_wp537.rev_links                  OK
USR_wp537.rev_options                OK
USR_wp537.rev_postmeta               OK
USR_wp537.rev_posts
Error : Incorrect key file for table './USR_wp537/rev_posts'; try to repair it
Error : Incorrect key file for table 'rev_posts'; try to repair it
error : Corrupt
USR_wp537.rev_term_relationships     OK
USR_wp537.rev_term_taxonomy          OK
USR_wp537.rev_terms                  OK
USR_wp537.rev_usermeta               OK
USR_wp537.rev_users

Eventually the only way to fix it is to do:

mysql> REPAIR TABLE rev_posts USE_FRM;

This is also MySQL 5.5.

top - 20:17:11 up 4 days, 8:57, 1
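Beyond one-off repairs, MyISAM installations often enable automatic crash recovery in my.cnf so corrupt tables are checked and repaired at open time; a sketch (the option is spelled myisam-recover in MySQL 5.5 and myisam-recover-options in later releases, so check which your server accepts):

```
[mysqld]
myisam-recover = BACKUP,FORCE
```

BACKUP keeps a copy of any data file that gets changed during recovery, and FORCE lets the repair proceed even if rows would be lost. None of this addresses the root cause; recurring "Incorrect key file" errors are worth correlating with unclean shutdowns, a full tmpdir, or failing storage.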