
Posts

Showing posts from September, 2014

heartbeat - Which technique should be chosen for IP failover with manual control

I have the following setup, a Linux stack with the front-ends running an nginx proxy plus static assets and the back-ends running Ruby on Rails and MySQL in master-master replication:

Primary site: front-end.a, back-end.a
Secondary site: front-end.b, back-end.b
A router sitting on a shared network that can route to both primary and secondary sites

The primary site serves requests most of the time; the secondary site is redundant. back-end.b is in master-master replication with back-end.a but is read-only. When the primary site goes down, requests need to be redirected to the secondary site. This will show a 503 Service Unavailable page until manual intervention confirms that the primary site won't come back and hits the big switch that makes the secondary site live and read-write. The primary site can then be brought back in a controlled fashion, with back-end.a becoming a read-only replication slave of back-end.b. When everything on the primary site is ready again, front-end.b will start s
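One technique often chosen for this kind of manually gated IP failover is keepalived (VRRP) with automatic failback disabled. The fragment below is only a sketch of that idea; the interface name, virtual_router_id, and VIP are assumptions, not taken from the question, and heartbeat/Pacemaker would be configured along similar lines:

```
vrrp_instance VI_1 {
    state BACKUP            # start as BACKUP on both nodes
    interface eth0          # assumed interface name
    virtual_router_id 51
    priority 100            # give the primary node a higher priority, e.g. 150
    nopreempt               # no automatic failback: a recovered primary stays
                            # passive until an operator intervenes
    virtual_ipaddress {
        203.0.113.10/24     # assumed service VIP the router points at
    }
}
```

The nopreempt option is what provides the "manual control" part: after a failover, traffic stays on the secondary until you deliberately move it back.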

apache 2.2 - How does ServerName and ServerAlias work?

It's the following part of a virtual host config that I need further clarification on:

# Admin email, Server Name (domain name), and any aliases
ServerAdmin example@example.com
ServerName 141.29.495.999
ServerAlias example.com
...

This is an example config, similar to what I currently have (I don't have a domain name at the moment). My understanding so far:

- Allow the following settings for all HTTP requests made on port 80 to IPs that this server can be contacted on. For instance, if the server could be accessed on more than one IP, you could restrict this directive to just one instead of both.
- ServerName - If the host part of the HTTP request matches this name, then allow the request. Normally this would be a domain name that maps to an IP, but in this case the HTTP request host must match this IP.
- ServerAlias - Alternate names accepted by the server.

The confusing part for me is, in the above scenario, if I set ServerAlias mytestname.com and then made an HTTP request to mytestname.com
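To illustrate how the matching plays out, here is a minimal name-based virtual host (all names are placeholders, not from the question). Apache compares the request's Host header against ServerName and then each ServerAlias; if nothing matches, the first vhost defined for that address:port wins by default:

```apache
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com mytestname.com
    DocumentRoot /var/www/example
</VirtualHost>
```

So a request with Host: mytestname.com lands in this vhost via the alias, exactly as if it had matched ServerName.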

networking - How to route ipv6 to an openvz container?

My hosting provider, OVH, offers me a /64 IPv6 block. I'm pretty new to IPv6 and a bit of a newbie in networking in general. I was able to route an IPv6 address on my Proxmox server by properly configuring my /etc/network/interfaces and setting up an IPv6 address. I can ping to and from it from another IPv6 server. Now, I'd like to assign addresses from the block to my OpenVZ container, but I don't really know how to do that. I'm used to their vRack technology, but it works differently because the IPs are assigned to the vRack itself and routed on the server. I already tried to set up a new IPv6 address in the Proxmox interface, but either I misconfigured it or that is not the way to go. I'm pretty sure it has something to do with properly configuring a gateway and/or routes, but I don't know where to start. Any ideas?

Dell PERC 4/DC (PowerEdge 2850) and ZFS

I understand that ZFS prefers to have as much data about the drives as possible, and that the best thing to do is turn off RAID. The hardware environment is a Dell PowerEdge 2850 with a PERC 4/DC and four drives (73 GB each) installed out of six possible. The software is FreeNAS 8.0.2 with ZFS, booting from a USB key. I've configured the RAID this way: each physical drive is a logical drive in a RAID 0. No special configurations were made beyond this. Is this optimal for ZFS? How do I properly set this up under FreeNAS as a RAID-Z? Do I want to? In my reading it was said that one can't add a new disk to a RAID-Z pool; is this still true? How would you go about adding two new disks in a redundant fashion to a zpool in FreeNAS? Answer No, this is not optimal for ZFS. This is outlined here on Server Fault at: ZFS SAS/SATA controller recommendations The PERC 4/DC controller is a basic PCI-X parallel SCSI RAID controller. ZFS prefers to handle whole-disk management, so the be

nginx - Disk I/O and load average peaks once every hour

We have updated our server from Debian Wheezy to Jessie and from PHP 5.6 to PHP 7.0, but now we have a disk I/O and load average peak exactly every hour. The exact time depends on the system start time. On this server, we have:

nginx/1.10.1
PHP 7.0.8-1~dotdeb+8.1
Percona MySQL Server 5.6.30-76.3-log
dovecot 2.2.devel
postfix 2.11.3-1
java 1.7.0_101

We have tried returning to PHP 5.6, disabling cron, disabling postfix and dovecot, and stopping our Java app, but nothing helped. The peaks look like the following: The iotop looks like the following: How can I know exactly what causes these peaks and eliminate them? Answer Maybe you have some MySQL scheduled events going on each hour? MySQL Events are tasks that run according to a schedule. Therefore, we sometimes refer to them as scheduled events. When you create an event, you are creating a named database object containing one or more SQL statements to be executed at one or more regular intervals, beginning and ending at a speci
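Following that hint, you could check whether any MySQL events exist at all and whether the event scheduler is even running. The queries below are a generic sketch (your schemas and event names will of course differ):

```sql
-- Is the event scheduler active?
SHOW VARIABLES LIKE 'event_scheduler';

-- What events exist, on what schedule, and when did they last run?
SELECT EVENT_SCHEMA, EVENT_NAME, STATUS,
       INTERVAL_VALUE, INTERVAL_FIELD, LAST_EXECUTED
FROM information_schema.EVENTS;
```

An event with INTERVAL_VALUE = 1 and INTERVAL_FIELD = HOUR, with LAST_EXECUTED lining up with the peaks, would be a strong suspect.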

nginx: redirecting to SSL based on the original request URL (not $server_name)

I have an nginx config that is similar to this:

server {
    server_name my-english-site.com my-french-site.com;
    listen 0.0.0.0:80;
    rewrite ^ https://$server_name$request_uri? permanent;
}
server {
    listen 0.0.0.0:443 ssl;
    server_name my-english-site.com my-french-site.com;
}

When someone goes to http://my-french-site.com , it redirects to https://my-english-site.com , because $server_name expands to the first name listed in the server_name directive. I tried replacing $server_name with $host , expecting it to use the value of the Host request header, but it still redirects to the English URL. How can I get non-HTTPS requests redirected to the corresponding HTTPS URLs? Thank you!
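A sketch of the usual fix: redirect based on $host, which carries the hostname the client actually requested, instead of $server_name, which is the first name of the server block. Note also that `permanent` (301) redirects are cached aggressively by browsers, so an earlier wrong redirect can linger; retest with a fresh client or curl:

```nginx
server {
    listen 80;
    server_name my-english-site.com my-french-site.com;
    # $host preserves whichever of the two names was requested
    return 301 https://$host$request_uri;
}
```

A quick check that bypasses any browser cache: `curl -sI http://my-french-site.com/ | grep -i location` should show the French HTTPS URL.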

raid - mdadm raid6 recovery reads more from one drive?

I just replaced a faulty drive in my RAID 6 array (consisting of 8 drives). During the recovery I noticed something strange in iostat: all drives are getting the same speeds (as expected) except for one drive (sdi), which is constantly reading faster than the rest. It is reading about one eighth faster, which might have something to do with there being eight drives in total in the array, but I don't know why... This was true during the whole recovery (always the same drive reading faster than all the rest), and looking at the total statistics for the recovery, all drives have read/written pretty much the same amount, except for sdi, which has read one eighth more. Some iostat stats averaged over 100s:

Device:   tps     MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
sdb       444.80  26.15      0.00       2615     0
sdb1      444.80  26.15      0.00       2615     0
sdc       445.07  26.15      0.00       2615

ssl certificate for www.example.com and example.com

I used the make-dummy-cert script that comes with Apache 2.2 and mod_ssl to make a self-signed certificate. I tried making it for www.example.com, example.com, or *.example.com, but none of them would work for both www.example.com and example.com. The browser would say "The certificate is only valid for example.com" (or www.example.com or *.example.com, respectively). How do I make a self-signed cert that works for both cases?
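The dummy-cert script sets only a single Common Name, which is why no single run covers both names. A cert can list multiple names via the subjectAltName (SAN) extension. A sketch with a reasonably recent OpenSSL (1.1.1+ for the -addext flag; older versions need a config file instead); filenames are illustrative:

```shell
# Self-signed cert valid for both the apex and the www name:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout example.key -out example.crt \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:www.example.com"

# Verify both names made it into the certificate:
openssl x509 -in example.crt -noout -ext subjectAltName
```

Browsers match against the SAN list, so this one cert satisfies both hostnames.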

linux - Advanced NAT - Mix PAT with NAT

I've just configured an Ubuntu Server as a router with NAT/PAT, and I'm trying to do the following: I work for a school that has divided its network into smaller networks connected by routers. I need to create an IP address visible on the internal network of one room that would forward all traffic to an external IP address. To explain, this is the configuration:

NAT outside: eth1 - 192.168.1.254/24
NAT inside: br0 - 192.168.2.10, masquerading enabled

I need to create an address like 192.168.2.1, also on br0, that would forward all its traffic to the IP 192.168.1.1 and would appear as if that IP were directly connected to the network, but would NOT have masquerading enabled on it. The basic idea is for the router's address on br0 to be 192.168.2.10, and for br0 to also have another address that does not masquerade and forwards all its traffic to 192.168.1.1, that being the address of the main router. The reason for this is that br0 is a bridge between the physical ne

Locating memory leak in Apache httpd process, PHP/Doctrine-based application

I have a PHP application using these components:

Apache 2.2.3-31 on CentOS 5.4
PHP 5.2.10
Xdebug 2.0.5 with Remote Debugging enabled
APC 3.0.19
Doctrine ORM for PHP 1.2.1, using Query Caching and Result Caching via APC
MySQL 5.0.77, using Query Caching

I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl, since together they grow to take up 100% of memory. Here is a snapshot of my top output:

PID   USER    PR  NI  VIRT  RES   SHR  S  %CPU  %MEM  TIME+    COMMAND
1471  apache  16  0   626m  201m  18m  S  0.0   10.2  1:11.02  httpd
1470  apache  16  0   622m  198m  18m  S  0.0   10.1  1:14.49  httpd

linux - PHP5 extensions

I have looked through many tutorials on installing a web server, and some of them list enormous numbers of PHP extensions. I have a few questions about that: Why would one want to install all those extensions? How do you know which extensions you have to install for your site to work properly? Why do some tutorials "just" tell you to install them all, while others tell you to install 4 or 5 of them? Thanks! P.S. I'm quite new to Linux, I'm installing a web server using nginx, and I'm looking for information about things that look odd to me at the moment. EDIT: Since the question has been answered, I would like to know which of these are most likely unnecessary for a WordPress or SMF installation?

php5-fpm php5-mysql php5-xsl php5-curl php5-gd php5-intl php-pear php5-imagick php5-imap php5-mcrypt php5-memcache php5-xcache php5-ming php5-ps php5-pspell php5-recode php5-snmp php5-sqlite php5-tidy php5-xmlrpc

Perhaps there are some extensions that would optimize my webs

Nameservers for Open SRS - who is accountable for my domain?

I purchased the "release" of a domain from a domain provider who was also hosting the corresponding website. I was given an account to log in to at opensrs.net. I pointed the domain to different nameservers to allow me to host it myself. Now, several months later, the client received a domain renewal notice from the original domain provider. They advised the client to ignore the notice because payment was good until 2014. Days later, the domain was suspended, the nameservers were changed to RENEWYOURNAME.NET, and it has now gone into redemption, meaning it's unavailable for 40 days. The original domain provider insists that they are no longer responsible for the domain. Who is managing my domain? Who is payment due to? Who cancelled the account and changed the nameservers? WHOIS data suggests that it's http://www.tucowsdomains.com/ , who is a reseller of http://www.opensrs.com/ . Tucows Domains clearly identifies the original domain provider as the current provider. Open SRS, t

linux - oracle lsnrctl TNS-12545: Connect failed because target host or object does not exist

I am trying to connect to my Oracle database. I can't get the listener started. Here is what I have tried.

$ lsnrctl start

LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 20-JAN-2012 08:19:58
Copyright (c) 1991, 2007, Oracle. All rights reserved.
Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 10.2.0.4.0 - Production
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
Error listening on: (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNS-12545: Connect failed because target host or object does not exist
TNS-12560: TNS:protocol adapter error
TNS-00515: Connect failed because target host or object does not exist

$ cat ./admin/tnsnames.ora

# TNSNAMES.ORA Network Configuration File:
# /u01/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
#
EXTPROC_CONNECTION_DATA.test =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (CONNECT_DATA =
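The error line shows an empty host in (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)), i.e. the listener has no resolvable name to bind to. One sketch of a fix is an explicit HOST entry in listener.ora; the hostname below is a placeholder and must match what `hostname` returns and resolve via /etc/hosts or DNS:

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    )
  )
```

A quick sanity check before restarting the listener is that `ping $(hostname)` succeeds on the box itself.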

web server - Problem with file owner/group

I have files on my website which I need access to on my server, and they also need to be editable by the webserver. With my current setup I can't seem to do that. If the owner/group is imran:imran, then I have full access to the files, but my webserver can't open/edit them. I was told that I need to match the files' owner/group to the ones the webserver uses; I had a look at files the webserver created and they were nobody:nobody. So I changed my whole public_html owner/group to nobody:nobody, because I simply had too many files that needed to be edited by the webserver and it would take too long to change the owner one by one. The webserver was able to edit them just fine after changing the owner, but then I realized something: now I can't even view the public_html folder. Does anyone know the correct way to set the owners so that both I and the webserver have access? (This is on a WHM/cPanel powered server)
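One common pattern, rather than handing the whole tree to nobody:nobody, is to keep your own user as owner and give the webserver's group write access, e.g. (as root, with names from the question) `chown -R imran:nobody public_html && chmod -R g+w public_html`. The runnable sketch below demonstrates only the group-write half on a throwaway directory, since the real chown needs root and your actual usernames:

```shell
# Demonstrate group-writable permissions on a scratch directory.
DIR=$(mktemp -d)
touch "$DIR/page.php"
chmod -R g+w "$DIR"      # group members (e.g. the webserver user's group) may now write
ls -l "$DIR/page.php"    # the group write bit is now set: -rw-rw-...
```

The owner (you) keeps full access, while anyone in the group (the webserver) can edit; no one loses the ability to list the directory.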

raid - Adding foreign disk to Dell Poweredge 2950

I have 4 300 GB disks in a RAID 5 configuration on a Dell PowerEdge 2950 with a PERC 5/i. I'm trying to replace one of the disks (it failed) with a disk from another identical server. I got a warning that the disk was foreign, but I cleared the foreign config. Now, however, I can't actually add the new disk to the RAID. It's visible under PDs, but under the VD menu the disk is listed as missing. How do I get the VD to recognize and add the new disk? Answer I think you will find that this cannot be done from the RAID utility and must be performed using Dell OpenManage. If you don't want to install this, you can download a LiveCD and boot to it to perform this operation. http://linux.dell.com/files/openmanage-contributions/

linux - HPET missing from available clocksources on CentOS

I am having trouble using HPET on my physical machine. It is not available, even though I have enabled it in my BIOS, forced it in GRUB, and triple-checked that my kernel was compiled with HPET support.

Motherboard: Supermicro X9DRW
Processor: 2x Intel(R) Xeon(R) CPU E5-2640
SAS Controller: LSI Logic / Symbios Logic SAS2004 PCI-Express Fusion-MPT SAS-2 [Spitfire] (rev 03)
Distro: CentOS 6.3
Kernel: 3.4.21-rt32 #2 SMP PREEMPT RT x86_64 GNU/Linux
Grub: hpet=force clocksource=hpet

.config file:
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_HPET=y

dmesg | grep hpet:
Command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet
Kernel command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16
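Before digging into BIOS or kernel config further, it is worth confirming what the running kernel actually registered; these sysfs files list the clocksources, and hpet should appear in the first one if hpet=force took effect:

```shell
# Clocksources the kernel registered at boot:
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
# The one currently in use:
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
```

If hpet is absent from available_clocksource, the clocksource=hpet boot parameter cannot select it, which narrows the problem to BIOS/ACPI tables or the kernel not detecting the HPET hardware at all.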

linux - PHP script is not running via cron

Although I checked the canonical question for cron ("Why is my crontab not working, and how can I troubleshoot it?"), I couldn't solve the problem. So, here I go: I have a PHP script that queries MySQL and IBM Informix databases (located on another host), generates JSON files, processes the information, and inserts it into the MySQL database. The script has a main file and another file that holds the query-handling functions:

/opt/project
  script.php
  functions.php

script.php requires the functions.php file, and I can run the script smoothly using an absolute or relative path. Inside /opt/project:

# php script.php

Somewhere else:

# /usr/bin/php /opt/project/scrpt.php

However, when it's executed by the cron job, it doesn't work. I have already tried setting environment variables, performed log tests, and even created a shell script to run script.php, with cron running the shell script. Server PATH (CentOS 7): /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/IBM/in
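Since the script works interactively, a frequent culprit is cron's different working directory and environment: a relative require() in script.php resolves against cron's cwd, not /opt/project. A crontab entry along these lines (the schedule and log path are illustrative) sets the working directory explicitly and captures stderr, which usually surfaces the real error:

```
# m h dom mon dow  command
0 * * * * cd /opt/project && /usr/bin/php script.php >> /tmp/script.cron.log 2>&1
```

After one scheduled run, the log file will either show the script's output or the PHP error (missing include, unreadable Informix client libraries, etc.) that cron was silently swallowing.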

lamp - How much VPS ram would I need to run Wordpress, Apache, SVN & MySQL?

Does anyone have a ballpark figure for how much VPS RAM (without burstable) I would need to run Apache with WordPress and Subversion as well as the MySQL instance? Apache would host a couple of sites and SSL. MySQL would have just the WordPress database. These sites are low traffic, less than 1k hits a day. Answer Bear in mind that each Apache worker will consume about 20-25 MB, so if your 1k hits were equally spaced over 8 hours per day, you would only have to serve about 0.03 requests per second. Even assuming all your traffic were concentrated in a single hour of the day (it isn't, of course), you would have to serve only about 0.28 requests per second. Another issue is how much memory your DB needs; that is simple to find out, however, and it is a fairly fixed cost. In the worst case you will have to transfer the entire DB (oh my god! refactor your SQL in this case! :) ) .. so double the previous number. The short answer is (IMHO) 128 MB will suffice, abundant
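The answer's arithmetic can be sketched as follows (assumptions beyond the answer's own numbers: traffic fits in 8 hours, ~25 MB per worker, and a pool of 5 Apache workers, which is a guess):

```shell
awk 'BEGIN {
  rps = 1000 / (8 * 3600)            # 1k hits spread over 8 h -> requests/second
  apache_mb = 5 * 25                 # 5 workers at ~25 MB each
  printf "req/s: %.3f\n", rps        # roughly 0.035
  printf "apache MB: %d\n", apache_mb
}'
```

At well under one request per second, a handful of workers is plenty, leaving the remainder of a 128 MB VPS for MySQL and the OS.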

debian - MySQL consuming nearly all memory in cache?

I am running a server with 24 GB of RAM. The only thing running on it is a MySQL server configured as master. When I check the memory via free -m I get this:

             total   used   free  shared  buffers  cached
Mem:         24158  24027    131       0      131    23050

Approx. 23 GB is in cache, which seems like a lot to me. I also set caching to 0 in my.cnf: query_cache_size = 0. How can I check what exactly is cached? A restart of MySQL doesn't clear the cache, and a FLUSH TABLES also didn't help. Answer You're looking at two different caches. free -m tells you how much memory the operating system is using for the disk cache, not how much MySQL is using for the database cache. And the operating system should be using as much memory as it can for the disk cache -- why wouldn't you want as big a cache as possible? That memory is always available to be used if an application needs it. See here for a goo

Replacing a file server, will links break?

I'd like to replace a file server we use (actually a couple of them) because it's a 2003 server. I'd prefer 2008 R2, because I can expand disk volumes on the fly. My question is: can I take down server 1 (2003) and replace it with server 2 (2008 R2) using the exact same computer name and with the exact same shares mapped (if I move the disk with it in a P2V conversion)? Will this break everybody's links to the shares they have mapped? I'd rather not do it if it's going to break links. Is there a better way to do this that I'm not thinking of? Thanks. Answer Here are the assumptions I am making for the answer I will provide at the end. Assumptions: Your Windows 2003 Server is a member of an Active Directory domain. Your new Windows 2008 R2 Server will be a member of the same Active Directory domain. You will be doing the "swap" during off hours, when you can ensure no one is connecting to the servers. You will assign the exact s

linux - Huge directory, not files inside, but directory itself

I have been trying to delete a directory from a CentOS server using rm -Rf /root/FFDC for the past 15 hours and am having great difficulty. I can't do a directory listing because it hangs the system (too many files?), but what I can see is that the directory size is not the usual 4096 bytes but 488 MB!

[root@IS-11034 ~]# ls -al
total 11760008
drwxr-x---  31 root root      4096 Aug 10 18:28 .
drwxr-xr-x  25 root root      4096 Aug 10 16:50 ..
drwxr-xr-x   2 root root 488701952 Aug 11 12:20 FFDC

I've checked the inodes and everything seems fine. I've checked top, and rm is still using the CPU after 15 hours, at 0.7%. The filesystem type is ext3. I'm clueless where to go from here, apart from backup and format. Answer Is even ls -1f /root/FFDC slow? With -1f the output won't be sorted and the file details will be left out. If the ls above runs fast, perhaps something like find /root/FFDC | xargs rm -vf would be faster? A normal rm -rf might do all kind of recurs
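One detail worth knowing: ext3 never shrinks a directory inode, so a directory that once held millions of entries stays huge (here 488 MB) even after its contents are deleted. A common remedy is to stream-delete the contents with find, then remove and recreate the directory so it gets a fresh, small inode. The sketch below runs against a scratch stand-in; on the real box the directory would be /root/FFDC:

```shell
# find -delete streams entries and unlinks them without the sorted,
# full-directory scan that makes rm -rf crawl on huge directories.
BIGDIR=$(mktemp -d)                   # stand-in for /root/FFDC
touch "$BIGDIR/file1" "$BIGDIR/file2"
find "$BIGDIR" -mindepth 1 -delete    # delete contents, keep the directory
rmdir "$BIGDIR"                       # recreate it afterwards if still needed
```

Recreating the directory afterwards (mkdir) is what restores the normal 4096-byte size.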

linux - RH 7.2: Memory usage in ps and top don't match; problem or normal behavior?

When I use the "top" command on my Red Hat 7.2 machine, it tells me that ~3.9 of 4.0 GB of RAM are in use, and that there's about 135 MB free. When I use the "ps" command, however, to list all processes and their memory utilization, the list only adds up to about 650 MB. Is this expected behavior, or is there something going on that should be cause for concern? I read that Linux will use free RAM to cache frequently used files from the disk; could that account for the "missing" RAM utilization? Thanks! IVR Avenger Answer Your guess is most likely correct. The "free memory" number given by top does not include what is used for the filesystem cache or buffers. The memory allocated to the filesystem cache is free in the sense that if a process needed some of it, it could easily be made available, but top will not show you this. free -m will give you a better idea of how much memory your processes are actually using (in MB), on the "

website - If I register a domain name as a name server, what do I set its name servers to?

Do I add my ISP's name servers for the domain, or the domain itself as the name server? E.g., say I have the domain www.shoes.com and a server set up for DNS, ns1.shoes.com. Will the name server for shoes.com be ns1.shoes.com or my ISP's name servers? EDIT: Just to clarify, my DNS server is configured with the ISP's name servers, but at the domain registrar, what do I put as the name server for the domain? Example: Say I have 3 domain names:

www.shoes.com
www.hats.com
www.shirts.com

I have a web server web.shoes.com and a name server ns1.shoes.com. www.hats.com will have the name server ns1.shoes.com, and www.shirts.com will have the name server ns1.shoes.com. www.shoes.com I am unsure about. Can it be its own name server?

security - Unsecured MySQL 'root'@'localhost' account accessed remotely?

A little background: We've just had our PBX system hacked. The server itself seems secure (no logged unauthorised console access - SSH etc), but somehow the hackers have managed to inject a new admin user into the PBX software (FreePBX, backed by MySQL). Apache logs imply that the hackers managed to add the user without using the web interface (or any exploit in the web interface). Now, I have since discovered that MySQL was running without a root password (!!) and openly bound to the external IP address (Obviously, I have locked this down now). However, the only root level user in MySQL was 'root'@'localhost' and 'root'@'127.0.0.1' , both of which should only have been accessible locally. So, my question is this: Is there a way of spoofing a connection to MySQL so that it will allow connection to the 'root'@'localhost' user from a remote IP address, WITHOUT running any other exploit locally? For reference, the box is Centos 5 (Linu
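When auditing a box like this after the fact, it helps to enumerate exactly which MySQL accounts exist and from which hosts they may connect, and to set a root password. The statements below are a generic sketch (run locally via the mysql client; the password is obviously a placeholder), using the SET PASSWORD syntax of the MySQL versions shipped with CentOS 5:

```sql
-- Which accounts exist, and from where can they connect?
SELECT User, Host FROM mysql.user;

-- Give root a password on both local entries:
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new-strong-password');
SET PASSWORD FOR 'root'@'127.0.0.1' = PASSWORD('new-strong-password');
FLUSH PRIVILEGES;
```

Any unexpected rows with Host = '%' (or anonymous users with an empty User column) in the first query would explain remote access without needing to spoof localhost at all.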

How to connect, through a wireless router, to a subnet, and have the clients join AD domain?

First, I'll describe my subnet A:

an AD controller, say, adcontroller.mydomain, with IP 192.168.1.3
a DHCP and DNS server, say, dhcpdns.mydomain, with IP 192.168.1.10

Now I have a Cisco WRVS4400N wireless router, plugged through its WAN port into the switch of subnet A. The router has IP 192.168.1.7 via DHCP on subnet A. I also have several PCs/laptops connected to the WRVS4400N, and I have done the following configuration, but I cannot get DHCP addresses for these PCs/laptops from dhcpdns.mydomain:

disable DHCP in the router
configure the router to have a static IP in subnet A, say, 192.168.1.7 as before by DHCP
leave the gateway empty, fill in the DNS as 192.168.1.10 (dhcpdns.mydomain)

So, my questions are: how do I get DHCP addresses for these PCs/laptops from subnet A, i.e., dhcpdns.mydomain? How do I join these PCs/laptops to the AD domain? I see an option in the WRVS4400N configuration page called domain name; does this mean the router can join the domain also? How is the authentication done? Many thanks! Answer

untagged - How to enter the field with 0 experience and an A+ cert?

To explain my situation: right now I am a high school senior with an A+ certification, and I am aiming for the Net+ and (maybe) Security+ before the lifetime certification period ends in January, as well as (maybe) a vendor cert. I also have experience in web development, PHP, and Java, and am aiming for C#. I went for the certifications because one teacher (a sysadmin turned teacher, unlike most who have zero field experience) pushed the class to do it. However, when talking with the sysadmin of another school, he said that the A+ is almost worthless, while the Net+ and the Security+ are only marginally better. He says experience is the only real weight. Now the issue is whether my path is correct. You can't expect a 17-year-old to have 10 years of experience, but how do I even get a resume looked at? With certifications. Or so I thought. After getting these certs, I was thinking of going to college for a program like MIS (Information Systems) and minoring in CS. But

Warm Standby with Windows Server 2003

We have a primary file server for our Windows Server 2003 (Standard Edition) domain. We're running Active Directory in a plain-vanilla type environment. When we recently migrated from an old server to a new one, I set up a DNS entry, Files.OurDomain.local, as an alias (CNAME) to the real server name, and I disabled strict name checking (see MSDN). All client machines now refer to \\Files or \\Files.OurDomain.local to get to their files. No one ever uses RealServer.OurDomain.local. Printers also use an alias to get to their server. I plan to use this setup in the event of a significant server problem: switching the DNS alias to another server as a poor man's warm standby. It should also make future server migrations very easy. I tried DFS, but it didn't give me the comfort that I had exact control over where files were going. The default TTL (Time to Live) for the DNS entries is 1 hour. That's probably what it would take the tech staff to determine

linux - Execution time differs for different users

I am running a simple R job as root and as another, limited user. The execution time differs significantly. What could be the source of the problem? Further information: here is how I compare the run times:

# time /share/binary/R/bin/R CMD BATCH s1n\=50.R
real 0m0.278s
user 0m0.217s
sys 0m0.032s

# su john
$ time /share/binary/R/bin/R CMD BATCH s1n\=50.R

The run under the john user takes a long time and never finishes! The output of perf during this interval is:

PerfTop: 906 irqs/sec kernel:19.3% exact: 0.0% [1000Hz cycles], (all, 8 CPUs)

samples  pcnt   function        DSO
598.00   14.5%  __GI_vfprintf   /lib64/libc-2.12.so
194.00    4.7%  intel_idle

networking - What are the 'gotcha's' a software developer needs to look out for if tasked with setting up a production ready server?

As a software developer I am very used to installing my typical stack (Java, MySQL, and Tomcat/Apache) on my development machine. But setting up and securing a production machine is not something I would feel confident doing. Is there an A-Z guide for dummies on setting up and securing a production server? Is the process very different depending on the platform (Windows or Linux)? Are there some general rules that can be applied across different platforms (and application stacks)?

domain name system - When do I need an own DNS server?

Note: I realized belatedly that this is a duplicate of question 23744, which already has good answers. I couldn't close this post for lack of reputation; maybe someone else could step in. I use hosted servers for my company, and although I might opt for colocation of more customized machines in the future, on the whole I'm not too keen on diving too deep into the "datacenter business". So generally, I would like to leave the handling of my infrastructure to dedicated pros as much as possible. Recently, I've been starting to lust for some more flexibility regarding the DNS entries for my domain and have looked into running my own name server(s). It seems to me that running a professional, failsafe name server is a little more effort than I'm willing to commit to just now. Still, I like the idea of having a lot of control over it. For a more experienced sysadmin, what are the indications for running your own name servers? And when using name servers that are professionally mai

performance - Choosing linux server for mysql - memory clock speed

I have a Linux MySQL server cluster where the master and slaves are overloaded by a combination of read/write I/O and SELECT query load. We purchased FusionIO cards to replace the hard drives. My question is whether to upgrade the servers themselves, or just replace the hard drives with FusionIO cards in the existing servers. The existing servers have 266 MHz memory (Xeon E5345 - a bit outdated by now), while we can purchase servers with at least 1333 MHz RAM. We would rather not spend money on new servers, since FusionIO cards are already very costly. We tested a FusionIO card in a brand new 1333 MHz RAM server, and it gave us a 4-5x speedup, but we do not know the contribution of RAM speed vs. FusionIO. What are the best practices on Linux to examine whether the RAM speed is a real bottleneck or not?

Trouble with nginx and serving from multiple directories under the same domain

I have nginx set up to serve from /usr/share/nginx/html, and it does this fine. I also want it to serve from /home/user/public_html/map on the same domain. So:

my.domain.com would get you the files in /usr/share/nginx/html
my.domain.com/map would get you the files in /home/user/public_html/map

With the configuration below (/etc/nginx/nginx.conf) it appears to be going to my.domain.com/map/map, as shown by this:

2011/03/12 09:50:26 [error] 2626#0: *254 "/home/user/public_html/map/map/index.html" is forbidden (13: Permission denied), client: , server: _, request: "GET /map/ HTTP/1.1", host: " "

I've tried a few things but I'm still not able to get it to cooperate, so any help would be greatly appreciated.

#######################################################################
#
# This is the main Nginx configuration file.
#
#######################################################################
#--------------------------------------------
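The doubled /map/map in the error path is the classic symptom of using `root` inside the location: with `root`, nginx appends the full URI to the path, while `alias` substitutes the location prefix instead. A sketch using the paths from the question:

```nginx
location /map/ {
    # With "root /home/user/public_html;" a request for /map/ would resolve
    # to /home/user/public_html/map/map/ -- the doubled path in the error.
    # "alias" maps /map/ directly onto this directory:
    alias /home/user/public_html/map/;
    index index.html;
}
```

Separately, the "(13: Permission denied)" part means the nginx worker user also needs execute (traverse) permission on /home/user and each directory below it, or the request will still fail even with the correct path.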

hp - Third-party SSD in Proliant g8?

Anyone having success with third-party SATA or SAS drives with Proliant G8's? I know the G7's were flaky and had BIOS issues. We're looking for real-world success stories with particular brands and models of enterprise-class SSDs. (We'd hoped to install some Intel 910 cards, but they're so scarce these days it's impossible to locate them before our implementation deadline.)

Windows command line tool for disk IO monitoring

I am looking at extracting disk I/O statistics on Windows 2003 upon the occurrence of certain events, e.g. a long full GC that waits a long time for CPU resources. I have read that Process Explorer allows you to do that, but it is a GUI-based application, which means you must know when the problem will occur again and you must already be logged into your server. However, because I do not know when it will happen, I need to write a script that outputs disk I/O statistics when the last GC took more than x seconds. Are there any such command line tools already available out there, so that I can simply call that program to output the results for me? Answer I am not sure about the exact type of data you are trying to collect, but all perfmon counter data is available via the typeperf command line utility, even on Server 2003. Sample use:

List counters available (without instances): typeperf -q
Sample total CPU usage over 10 seconds once and return: typeperf "
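For disk I/O specifically, the PhysicalDisk counters can be pulled the same way; the counter names below are the standard English perfmon names, and the intervals, sample counts, and file names are arbitrary choices for illustration:

```
rem Sample total disk reads/writes per second, once a second, 10 samples:
typeperf "\PhysicalDisk(_Total)\Disk Reads/sec" "\PhysicalDisk(_Total)\Disk Writes/sec" -si 1 -sc 10

rem Log a list of counters (one per line in counters.txt) to CSV for later analysis:
typeperf -cf counters.txt -f CSV -o disklog.csv -si 1 -sc 60
```

Since typeperf is a plain console program, your GC-watchdog script can invoke it on demand and redirect or parse the CSV output.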

ubuntu - All outgoing mail is marked as spam by gmail

All my emails sent from my SMTP server (created with VestaCP) are being marked as spam by Gmail. DNS and DKIM are set up correctly. Using isnotspam.com, I figured out that the reason is SpamAssassin giving them a score of 3.7. Here is the report:

----------------------------------------------------------
SpamAssassin check details:
----------------------------------------------------------
SpamAssassin 3.4.1 (2015-04-28)
Result: ham (non-spam) (03.7points, 10.0 required)

 pts rule name              description
---- ---------------------- -------------------------------
* 3.5 BAYES_99              BODY: Bayes spam probability is 99 to 100%
*     [score: 1.0000]
* -0.0 SPF_HELO_PASS        SPF: HELO matches SPF record
* -0.0 SPF_PASS             SPF: sender matches SPF record
* -0.0 RP_MATCHES_RCVD      Envelope sender domain matches handover relay domain
* 0.2 BAYES_999             BODY: Bayes spam probability is 99.9 to 100%
*     [score: 1.0000]
* 0.1 HTML_MESSAGE          BODY: HTML included in message
* -0.1 DKIM_VALID_AU        Message has a valid DKIM or DK sign

ZFS on FreeBSD: recovery from data corruption

I have several TB of very valuable personal data in a zpool which I cannot access due to data corruption. The pool was originally set up back in 2009 or so on a FreeBSD 7.2 system running inside a VMware virtual machine on top of an Ubuntu 8.04 system. The FreeBSD VM is still available and running fine; only the host OS has since changed to Debian 6. The hard drives are made accessible to the guest VM by means of VMware generic SCSI devices, 12 in total. There are 2 pools:

zpool01: 2x 4x 500 GB
zpool02: 1x 4x 160 GB

The one that works is empty; the broken one holds all the important data:

[user@host ~]$ uname -a
FreeBSD host.domain 7.2-RELEASE FreeBSD 7.2-RELEASE #0: \
Fri May 1 07:18:07 UTC 2009 \
root@driscoll.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64

[user@host ~]$ dmesg | grep ZFS
WARNING: ZFS is considered to be an experimental feature in FreeBSD.
ZFS filesystem version 6
ZFS storage pool version 6

[user@host ~]$ sudo zpool status
pool: zpool0