
Posts

Showing posts from December, 2014

How to connect to a Remote Windows Server via internet without using Static IP?

I have a Windows Server 2008 R2 machine and I want to make it accessible to my customers so they can work on my server through sessions they can access from anywhere in the world via the internet.

Common thought solution: You might say to purchase a static IP for it, so that it is reachable from anywhere in the world.

Bottleneck: I live in India and the only broadband networks available near my location are Airtel and BSNL. I have already purchased a good plan from Airtel, which serves us well except when the internet link is dropped from the ISP side for hours, which happens about once a month. As IT people we know that the server needs 24/7 uptime to be useful to customers, and if the internet link keeps dropping for hours, that is a major setback. BSNL is prone to frequent internet link drops, is slower than Airtel, and even costs double in comparison to my c

linux - How do I find out what is using up all the space on my / partition?

I am on a large instance on Amazon's EC2 servers. I run the df command and get:

    root@db:~# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda1             9.9G  9.1G  284M  98% /
    tmpfs                 3.8G     0  3.8G   0% /lib/init/rw
    varrun                3.8G  116K  3.8G   1% /var/run
    varlock               3.8G     0  3.8G   0% /var/lock
    udev                  3.8G   80K  3.8G   0% /dev
    tmpfs                 3.8G     0  3.8G   0% /dev/shm
    /dev/sdb              414G  957M  392G   1% /mnt
    /dev/sdf               50G   12G   35G  26% /byp
    /dev/sdk               99G   31G   63G  33% /backups

I then run the du command and get:

    root@db:/# du -s -h /*
    31G     /backups
    5.5M    /bin
    136K    /boot
    12G     /byp
    80K     /dev
    5.8M    /etc
    12K     /home
    70M     /lib
    11M     /lib32
    0       /lib64
    16K     /lost+found
    759M    /mnt
    4.0K    /opt
    du: cannot access `/proc/6917/task/6917/fd/4': No such file or directory
    du: cannot access `/proc/6917/fd/4': No such file or di
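A common cause of df and du disagreeing like this (not shown in the truncated post, but worth checking) is a deleted file that a running process still holds open: df still counts its blocks, while du can no longer find a path for it. A minimal check, assuming lsof is installed:

    # List open files that have been unlinked (link count 0) on the root filesystem
    lsof +L1 /
    # Restarting or signalling the offending process releases the space

Another quick sanity check is du -xsh /*, so du does not descend into other mounted filesystems while summing.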

performance - What utilities do you use and how do you use them when benchmarking your harddrive subsystem for a SAN replacement?

So, I have a series of questions on Server Fault about LeftHand SANs (questions 1441 and 4478). One of the responses I received questioned whether the iSCSI throughput returning from the SAN would be restricted by the 1Gb network card. I think that is a very good point to look at. To see whether this would be an actual bottleneck for my purposes, I should perform some benchmarks on the current production server. What utilities should I use for profiling? If there are particular metrics I should be reading, what are they? IOPS? Disk read/write bytes per second? etc. Thank you, Keith Answer Iometer appears to be the overwhelming recommendation for testing and benchmarking iSCSI SAN performance. Because it is widely used by vendors, its numbers offer a fairly reliable comparison metric across storage systems. If you plan on running SQL Server off your SAN, you should also look at doing some testing and tuning with SQLIO. Brent Ozar has a great SAN performance tuni
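Iometer is GUI-driven; for a scriptable Windows-side baseline you could also sketch something with Microsoft's diskspd (a swapped-in suggestion, not from the truncated answer; parameters are hypothetical and should be shaped to match your real I/O pattern):

    rem 8KB blocks, 60s run, 8 outstanding I/Os x 4 threads, random, 25% writes, latency stats
    diskspd.exe -b8K -d60 -o8 -t4 -r -w25 -L -c2G D:\iotest.dat

The numbers to record are the ones the question mentions: IOPS, MB/s throughput, and (just as important over iSCSI) the latency percentiles.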

amazon ec2 - Nginx set_real_ip_from AWS ELB load balancer address

I have a set of Nginx servers behind an Amazon ELB load balancer. I am using set_real_ip (from the HttpRealIpModule) so that I can access the originating client IP address on these servers (for passing through to php-fpm and for use in the HttpGeoIPModule). It seems that set_real_ip_from in the nginx configuration can only accept an IP address. However, with regard to ELB machines, Amazon says: Note: Because the set of IP addresses associated with a LoadBalancer can change over time, you should never create an "A" record with any specific IP address. If you want to use a friendly DNS name for your LoadBalancer instead of the name generated by the Elastic Load Balancing service, you should create a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53 to create a hosted zone. For more information, see Using Domain Names With Elastic Load Balancing. But if I need to input an IP address, I can't use a CNAME (either Amazon's or my own). Is there a soluti
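A common workaround (an assumption on my part, since the post is truncated) is that set_real_ip_from also accepts CIDR blocks, so you can trust the whole internal range the ELB's interfaces live in rather than a single address:

    # nginx http or server context; 10.0.0.0/8 is a placeholder for your VPC/EC2-internal range
    set_real_ip_from 10.0.0.0/8;
    real_ip_header X-Forwarded-For;

This stays valid even as Amazon rotates the ELB's addresses, because only the trusted range is pinned, not a specific IP.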

apache 2.2 - How to Enhance Round Robin DNS Load Balancing?

This follows on from my earlier question (Old Question). Following the suggestion in the answer, I found this text: The DNS based load balancing method shown above does not take care of various potential issues such as unavailable servers (if one server goes down), or DNS caching by other name servers. The DNS server does not have any knowledge of server availability and will continue to point to an unavailable server. It can only differentiate by IP address, not by server port. The IP address can also be cached by other nameservers, hence requests may not be sent to the load balancing DNS server. Now, thinking out of the box: does there exist some way to deal with these potential issues? Answer The answer you've quoted is wrong. such as unavailable servers (if one server goes down) If the user makes a subsequent request after one fails (times out), the resolver on the client should automatically switch to the next entry in the list. or DNS caching by oth
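For reference, the round-robin setup under discussion is nothing more than several A records sharing one name; a minimal zone-file sketch with placeholder addresses:

    www   IN  A   192.0.2.10
    www   IN  A   192.0.2.11
    www   IN  A   192.0.2.12

Most resolvers rotate the order of the returned set, and, as the answer notes, a client that times out on the first address should try the next one in the list.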

Active Directory Site/Domain Interactions

I'm having an issue wrapping my head around the myriad Active Directory components, and I'm hoping I can get some opinions or corrections from someone. At our workplace, we used to have an individual Active Directory domain and site for each office location we have. Each location also had a pair of domain controllers that were responsible for that location and would be used by any stations within that location for authentication, GPOs, etc. Recently, as part of another project, we condensed all of our domains down into our old top-level domain. The sites in AD remained the same, but each individual user account, computer, etc. was moved into the top-level domain. Each pair of local DCs was reduced to just one DC that was a member of the top-level domain. After finishing this, we ran into issues where we were getting very slow replication between DCs. The secondary issue was that individual stations or users seemed to be authenticating against any DC
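For diagnosing the slow replication described here, repadmin (included with Windows Server) is the usual starting point; a hedged sketch, with DC01 as a placeholder name:

    rem Summarise replication health across all DCs
    repadmin /replsummary
    rem Show inbound partners and last-success times for one DC
    repadmin /showrepl DC01

If stations authenticate against far-away DCs, it is also worth checking that each subnet is mapped to the right AD site, since unmapped subnets make clients pick DCs essentially at random.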

ssh - OpenSSH disable ControlMaster for given hostname

I am using OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011 on Mac OS X Snow Leopard. I have the ControlMaster feature configured to maintain persistent connections. My ~/.ssh/config has the following:

    Host *
      ControlPath /ms/%r@%h:%p
      ControlMaster auto
      ControlPersist 4h

    Host *.unfuddle.com
      ControlMaster no

However, from what I see, even when I use SSH for unfuddle.com hosts, a master connection always gets created:

    [andrey-mbp ~]$ ssh -v git@droolit.unfuddle.com
    OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
    debug1: Reading configuration data /Users/akhkharu/.ssh/config
    debug1: /Users/akhkharu/.ssh/config line 1: Applying options for *
    debug1: /Users/akhkharu/.ssh/config line 6: Applying options for *.unfuddle.com
    debug1: Reading configuration data /usr/local/Cellar/openssh/5.9p1/etc/ssh_config
    debug1: auto-mux: Trying existing master
    debug1: Control socket "/ms/git@droolit.unfuddle.com:22" does not exist
    debug1: Connecting to droolit.unfuddle.com [174.129.5.196] port
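The likely explanation (hedged, since no answer is shown) is that ssh_config uses the first obtained value for each option, so ControlMaster auto from the Host * block wins before the *.unfuddle.com block is even consulted. Reordering so the more specific block comes first should fix it:

    # More specific hosts must come first: ssh keeps the first value it obtains
    Host *.unfuddle.com
      ControlMaster no

    Host *
      ControlPath /ms/%r@%h:%p
      ControlMaster auto
      ControlPersist 4h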

apache 2.2 - LXC and port overlapping

I've done a lot of searching on Google and browsed what I could find on Server Fault, but can't find any solution to this. I have a server that is running LXC containers (two for right now, both Ubuntu). The LXC network is bridged (10.0.3.0/24), with the DHCP server for it at 10.0.3.1, and I'm using two IPs on that network: 10.0.3.2 (container 1 [CN1]) and 10.0.3.3 (container 2 [CN2]). I have Apache set up on both, and I have subdomains set up in DNS for a website of mine that point to my public IP (web1 -> CN1 and web2 -> CN2). The subdomains resolve correctly, but here is where the problem starts: whichever rule comes first in iptables determines which container receives the web traffic. So if external port 80 hits CN1 first, then that index.html file is shown, and if 80 is set for CN2 first, then that index.html file is shown. What I thought I'd do is set the Apache servers to listen on different ports, so I set CN1 to listen on 801 and
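Since iptables can only distinguish by IP address and port, never by hostname, one hedged approach is to give each container its own external port, with the understanding that true name-based routing needs a reverse proxy on the host instead:

    # Hypothetical: external 80 -> CN1, external 8080 -> CN2 (eth0 = public interface)
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.3.2:80
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.3.3:80

To serve both web1 and web2 on port 80, the cleaner design is a host-level Apache or nginx that proxies by Host header to the right container.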

linux - How to Run Multiple MySQL Versions on One Server

How do I run multiple versions of MySQL on one server box? This is on a box running SUSE Enterprise Server 10. The initial installation of MySQL v5.0.45 was done via RPM. I am now being asked to set up a second MySQL, running version 5.1.36, to run simultaneously with the previous version. I've downloaded the tarball for the binary distribution of MySQL v5.1.36. I'm looking for specifics on how to set it up, as well as any recommendations on managing the two different versions. Should I have one my.cnf or multiple ones? Should I keep them in /etc, or perhaps in the basedir of each MySQL instance? What is the best way to start and shut down both servers? etc. Answer It works fine. Just specify a separate conf, port, sock, etc. Personally, I would probably maintain a /etc/my.server1.cnf and /etc/my.server2.cnf for individual server settings. And for startup, just copy /etc/init.d/mysqld (or whatever it is called for SUSE), and it should be just a matter of updating
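A minimal sketch of the answer's suggestion, with hypothetical paths and the second instance moved off the default port and socket:

    # /etc/my.server2.cnf (second instance) would contain, e.g.:
    #   [mysqld]
    #   port    = 3307
    #   socket  = /var/run/mysqld/mysqld2.sock
    #   datadir = /var/lib/mysql2
    # Start the tarball build against its own config:
    /opt/mysql-5.1.36/bin/mysqld_safe --defaults-file=/etc/my.server2.cnf &

Clients then pick an instance explicitly, e.g. mysql --port=3307 --protocol=TCP, so the two versions never collide.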

apache 2.2 - What are some reasons why PHP would not log errors?

I'm running Apache2 on Ubuntu 10.04 LTS. I've configured /etc/php5/apache2/php.ini with the following:

    error_reporting = E_ALL & ~E_DEPRECATED
    display_errors = Off
    log_errors = On
    error_log = /var/log/php_errors.log

I have a PHP application that I believe has an error in it. With those settings and the suspected error, no php_errors.log was created. So I created a PHP file with syntax errors. Still no php_errors.log. What are some reasons why PHP would not log errors? Could it be permissions? I see that /var/log is owned by root:root. I don't know what user owns the PHP process. I believe root starts the Apache process, which then starts new worker processes as www-data for serving files. Would that mean I need to change permissions on the /var/log folder?
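The permissions theory is the most likely one: www-data cannot create new files in /var/log, which is root-owned and typically mode 755. A hedged fix is to create the log file yourself and hand it to the web-server user, rather than loosening /var/log itself:

    touch /var/log/php_errors.log
    chown www-data:www-data /var/log/php_errors.log
    service apache2 restart   # pick up any php.ini changes

Note also that when PHP's own error_log is unwritable, errors usually fall through to Apache's log, so /var/log/apache2/error.log is worth a look too.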

Nearline SAS on SATA controllers

I know that SAS drives cannot operate on a SATA controller. I know that SATA drives can operate on a SAS controller. But can nearline SAS drives, being SATA-class drives with a SAS interface, be used on A) only SAS controllers, or B) both SATA and SAS controllers? Note that the nearline SAS drives feature "dual port", but I'm unsure whether this means "has a SATA and a SAS port" or "has 2 SAS ports", especially since I seem to read that the connectors and cables themselves are interchangeable, which, if true, would make the dual-port conundrum moot. Answer "I know that SATA drives can operate on a SAS controller": you may know many popular SAS controllers that can run SATA disks, but don't make this assumption; there's nothing in the SAS specs that states they must also run SATA disks, they just share a lot of common features (connectors/cabling etc.). Manufacturers can pick and choose to do as they wish. Now your que

domain name system - How to solve "ERROR: No reverse DNS (PTR) entries. The problem MX records are:"

I have remote access to my Windows Server 2008 R2 (DNS-IIS-FTP-MAIL). Please see the link below for my web site: http://www.intodns.com/polyafzar.com How can I fix the error below on my server?

    ERROR: No reverse DNS (PTR) entries. The problem MX records are:
    234.60.7.31.in-addr.arpa -> no reverse (PTR) detected
    233.60.7.31.in-addr.arpa -> no reverse (PTR) detected
    You should contact your ISP and ask him to add a PTR record for your ips

Thanks in advance. Answer Just like it says: "You should contact your ISP and ask him to add a PTR record for your ips". You have to get whoever looks after your hosting to provide rDNS records in order to eliminate this error.
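Once the ISP (or whoever controls the 31.7.60.x range) has added the records, the fix can be verified from the Windows server itself; a hedged check:

    nslookup 31.7.60.234
    nslookup 31.7.60.233

Each reverse lookup should return the mail server's hostname, and that hostname's A record should point back at the same IP for other mail servers to be satisfied.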

How to configure DNS to point main domain at Firebase hosting, but not subdomain

I currently have a website hosted on cPanel which consists of subdomain.example.com and example.com. I'd like to keep the subdomain hosted where it is, but move the main domain to be hosted with Firebase. I'm struggling to work out the DNS changes required to make this work. If it is relevant, the domain was purchased from 123-reg and points to custom nameservers on my cPanel server. I read this answer, but I've also read that Firebase no longer allows CNAME records and requires A records. Would it work to have an A record pointing to the IP addresses supplied by Firebase and a CNAME record for the subdomain pointing to the server where it is currently hosted? Answer You need to create two A records: domain.com -> public IP address of your first server; sub.domain.com -> public IP address of your second server

apache 2.2 - WordPress can't install plugin even though file permissions are correct

I have an apache2 server; of course apache2 runs under the www-data account. All my WordPress files are owned by root:webmaster and have the g+w permission. Three of the accounts in the webmaster group are www-data, sftp_www, and root itself. The permissions seem to be really, really good. Here is the text copied from the terminal:

    root@srakrn:/var/www/html/blog/wp-content# ls -l
    total 20
    -rw-rw-r-- 1 root     webmaster   28 Jan  8  2012 index.php
    drwxrwsr-x 4 root     webmaster 4096 Jun  5 06:38 plugins
    drwxrwsr-x 5 root     webmaster 4096 May  6 18:33 themes
    drwxrwsr-x 2 root     webmaster 4096 Jun  5 06:38 upgrade
    drwxrwsr-x 3 www-data webmaster 4096 Jun  5 08:55 uploads
    root@srakrn:/var/www/html/blog/wp-content# groups www-data
    www-data : www-data webmaster

This is what WordPress has asked for: the FTP password. Usually WordPress won't ask for an FTP password if the directory is writable by WordPress. So, even though the plugin folder is writable
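WordPress only chooses the 'direct' filesystem method when the files are owned by the PHP process user; group-writability is not enough, which is why it falls back to asking for FTP credentials here. Two hedged options:

    # Option 1 (hypothetical): make www-data the owner of the WordPress tree
    chown -R www-data:webmaster /var/www/html/blog
    # Option 2: skip the ownership check by adding this line to wp-config.php:
    #   define('FS_METHOD', 'direct');

Option 2 keeps root ownership and works here because the directories really are group-writable by www-data, but it also means anything the web server can write, plugins can write too, so weigh the tradeoff.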

HP Proliant DL360 G8 & G9 with SAMSUNG PM863a SSDs

Does anyone use Samsung PM863a SSDs in HP DL360 G8s and G9s, specifically in a Windows Server 2016 Storage Spaces Direct mirrored implementation? What is your experience of this? Having seen other posts about ProLiants and 3rd-party SSDs on these forums, I am aware of the dangers/limitations of using non-HP disks with ProLiants, and the difficulties of getting the right caddies for 3rd-party disks, so I'm not interested in any further warnings about those aspects, thanks. Most of those posts appear to be about using consumer-grade SSDs (e.g. Pro or EVO) as opposed to enterprise-grade (as the PM863a is), and I should add I have been using Intel 3500 SSDs as mirrored OS disks in the G9s for the past 3 years without caddies and without any issue (except for the loss of monitoring and the constant warnings about 'incompatible' disks). I am stuck with the ProLiants, possibly for another year or two; however, our software-based SAN hardware is very ripe for replacement, so I wo

Unable to ping computers via DNS domain name

I have two desktop computers and recently set up a third computer, a Windows Server 2012 box, on which I installed DNS and which I set up as a domain controller. I was able to successfully join each computer to the new domain. There is a router, which goes to the outside world. Issue: I cannot ping one computer from the other via its fully qualified DNS domain name. I also set up an A record, which neither computer can ping. Domain: myoffice.com. Computers: ComputerA, ComputerB. Fully qualified names: ComputerA.myoffice.com and ComputerB.myoffice.com. Each computer can ping itself using the computer name or its fully qualified name. The server, however, can ping either computer. Each computer has the primary DNS server set to the server's IP address and the secondary set to the router's IP address. The router provides all DHCP addresses. I created a new A record in the DNS Manager in [DNS | | Forward Lookup Zones | myoffice.com], setting mail.myoffice.com to the IP add
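One likely culprit (an assumption based on the symptoms) is the router listed as secondary DNS: Windows clients can quietly fail over to the secondary, which knows nothing about the myoffice.com zone. Hedged checks from one of the desktops:

    nslookup ComputerB.myoffice.com
    ipconfig /all
    ipconfig /flushdns

If nslookup shows the router as the responding server, remove it from the clients' DNS settings and leave only the domain controller.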

ftp - gradual increase in capacity of a virtual hdd

I'm very new to using ESXi/vSphere and I'm planning to create an FTP server, and I just need to know a few basic things. How do I gradually increase the disk capacity in a virtualized environment, and is it even possible? A typical scenario would be: I initially allocate 10GB of storage capacity for the FTP server and, as demand increases, I can easily increase it to 20GB. What type of filesystem do I need to achieve this? Or do I have to create a new virtual hard disk?
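Yes, it is possible: you grow the virtual disk on the host, then grow the partition/filesystem inside the guest. A hedged sketch with hypothetical paths (vmkfstools is ESXi's disk tool; the same grow can also be done in the vSphere client):

    # On the ESXi host: extend the VMDK from 10GB to 20GB
    vmkfstools -X 20G /vmfs/volumes/datastore1/ftpserver/ftpserver.vmdk
    # Inside a Linux guest, after extending the partition, grow e.g. ext4:
    resize2fs /dev/sdb1

Using LVM inside the guest makes the in-guest half of this much easier, since a new or grown disk can simply be added to the volume group.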

postfix - Blacklist recipient email addresses

I have a mail server on Ubuntu using Postfix, Dovecot, and SpamAssassin. In Postfix I am using "plus addressing" via the recipient_delimiter option. So, for example, foo_junker@me.com and foo_stupid@me.com are valid and actually deliver to the foo account. Some of those addresses have gotten scraped/sold and are now heavy sources of spam (such as foo_dropbox@me.com). I'd like to block all email addressed to a specific list of addresses. I edited /etc/spamassassin/local.cf to:

    # Uncomment this line:
    shortcircuit USER_IN_BLACKLIST_TO on
    # Add this line (above the shortcircuit):
    blacklist_to foo_dropbox@me.com

After I restarted SpamAssassin (/etc/init.d/spamassassin restart), emails sent to the blacklisted address still come through unharmed. How can I hard-nuke these emails so I never see them?
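One possible reason the shortcircuit lines do nothing is that SpamAssassin's Shortcircuit plugin is not loaded by default (its loadplugin line in v320.pre is commented out). A heavier-handed alternative, sketched here as an assumption rather than taken from the post, is to have Postfix reject these recipients outright at SMTP time:

    # /etc/postfix/recipient_blacklist contains lines like:
    #   foo_dropbox@me.com    REJECT address retired due to spam
    postmap /etc/postfix/recipient_blacklist
    postconf -e 'smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/recipient_blacklist, permit_mynetworks, reject_unauth_destination'
    postfix reload

Rejecting at SMTP time means the mail is never accepted at all, so nothing needs filtering downstream.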

vps - How much memory is required for base lamp setup?

I am planning on renting a VPS. How much memory is required for a base setup of Debian, Apache, MySQL, and PHP? By base, I mean not considering traffic (which will be below 1k hits a day), and with no complicated databases or memory-eating scripts. For reference, I would consider 512MB more than I need (but I'm unsure of how right I am). Possible duplicate: How much VPS ram would I need to run Wordpress, Apache, SVN & MySQL? The difference is that I am asking specifically about Debian, Apache, MySQL, and PHP with the default Debian configuration: no memory tweaking and no replacing Apache with a lighter daemon. The other question also has wildly inconsistent answers.

linux - Why would an SSH connection fail to connect when the whole setup looks correct?

Using Ubuntu Server 14.04, I have the SSH server running on port 2222 (the building's network managers asked for it that way). I have had the whole setup reviewed and everything seems to be OK. This is what's set up: at the building they redirect TCP connections on port 2222 to my IP. The same goes for port 80, which I can reach. When I attempt to connect through SSH on port 2222, I can see the incoming network activity using iftop. I have ufw running with the proper rules: 2222/TCP ALLOW Anywhere. In the sshd.conf file I have the SSH server listening on port 2222. I set the SSH server to AUTH mode for logging, but I can't see anything in /var/log/auth.log when trying to connect. This is my sshd.conf file. Yes, the SSH daemon is running, and it was restarted. Still, any time I try to connect I get a message like:

    ssh: connect to host port 2222: Connection refused

Running the client in verbose mode outputs:

    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_co
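"Connection refused" while iftop shows packets arriving usually means the forwarded traffic reaches the machine but nothing is listening at the exact address/port it lands on (the kernel then answers with a RST, which is also why nothing appears in auth.log). Hedged checks:

    # Is sshd really bound to 2222, and on which address?
    netstat -tlnp | grep 2222
    # Watch the port at packet level while a client tries to connect:
    tcpdump -ni eth0 tcp port 2222

If tcpdump shows the SYNs but netstat shows sshd bound only to a different address, or the building forwards to the wrong internal IP, that mismatch would produce exactly this symptom.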

virtualization - Where to install ESXi so that all local drives are available to VSAN

I am currently setting up a new 3-host VSAN cluster that will be placed into production soon. (Note this is the newer "Virtual SAN" (VSAN) technology, not the older vSphere Storage Appliance (VSA) technology.) This is the first time I have worked with VSAN. Each of the three hosts in the cluster has four 1TB local HDDs and one 200GB local SSD (which VSAN needs for read/write caching) to contribute to the cluster. I have installed ESXi 5.5 directly on the first local HDD of each host. I added the three hosts to vCenter and launched the vSphere Web Client to configure VSAN. But instead of seeing all four local HDDs on each host as available to VSAN, only three are available. From what I have read in the VSAN documentation, disks used by VSAN must be used solely by VSAN. That is, once VSAN "takes over" a local drive, that drive cannot be used for any other purpose (such as reserving partitions for use by other OSes). However, it wasn't clear t

linux - Apache - High Availability

I'm looking for a way to set up Apache for high availability. The idea is to have a cluster of 2+ Apache servers serving the same websites. I can have the IP address of each server set up with round-robin DNS so that each request is randomly sent to one of the servers in the cluster (I'm not too concerned with load balancing just yet, though that may come into play later on). I already have this set up and working with multiple Apache VM servers (spread across multiple physical servers) serving websites via round-robin DNS, and it works fine. The SQL database is set up using MariaDB in a high-availability cluster, the web data (HTML, JS, PHP scripts, images, other assets) is stored in LizardFS, and the sessions are stored in a shared location as well. This all works well until one of the servers in the cluster becomes inaccessible for whatever reason. Then a percentage of the requests (roughly the number of downed servers divided by the number of total servers in the c
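A common remedy for that failure mode (an assumption here, since the post is truncated) is to stop relying on DNS alone and float the advertised IPs with VRRP, so that a surviving node takes over a dead node's address within seconds. A minimal keepalived sketch with placeholder values:

    # /etc/keepalived/keepalived.conf on the first node (the peer uses state BACKUP and a lower priority)
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.0.2.10/24
        }
    }

Round-robin DNS then hands out only these virtual addresses, each of which is always held by some healthy node.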

mod rewrite - How to check if canonicalized domains are being used? Apache 301 redirect does not preserve referrer

I have multiple domains which are set up to redirect (301) to my main domain. However, I know some of these domains have little to no value in terms of SEO, and I would like to get rid of them. A concern of mine is that there may be backlinks pointing at these domains. I checked Google Analytics and none of these domains came up, but I decided to confirm they would be registered if they were used. Unfortunately, in testing, my Apache 301 redirect does not seem to preserve the referring URL. I know this is largely dependent on the client, but the consensus seems to be that the referrer is preserved most of the time. Are there any settings in modern browsers which instruct them to remove the referrer when redirected? I'm seeing this behavior in Firefox, Chrome and IE. Is there anything I can do on the server side which might influence a client to preserve the referrer? If this is a dead end, what other methods are there to check whether there are any backlinks or usages of these aliased domains? Her
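To separate browser behavior from the server config, you can drive the redirect with curl and a synthetic Referer (-e sets the Referer header, -L follows the 301, -v prints each request and response):

    curl -v -L -e "http://some-backlink.example/" http://alias-domain.example/

If curl carries the Referer through the redirect while browsers drop it, the behavior is client-side and there is little the server can do to influence it.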

vmware esxi - Which RAID configuration is better for hypervisor-hosted virtual machines with three 300GB SAS drives

Probably a noob question, but I'd like some guidance on which RAID configuration is best in this server scenario. I am aware of and understand the different RAID configurations, but am after guidance on the use of RAID for hypervisor-hosted virtual machines specifically. Hardware is an older IBM x3650 with an IBM ServeRAID 8k hardware controller. There are 3x 300GB SAS drives and 2x 146GB SAS drives. I wish to install the free VMware ESXi 6.0 hypervisor on the system and have just found out that I can't boot from a USB flash drive, so I'm going to have to use one of the HDDs or RAID volumes to install the bare-metal OS on. I'm only going to be running 2-3 VM guests (most likely 2x Win Server 2012R2 Std and maybe 1x Win7 Pro or a Linux distro) for basic AD/DC redundancy and an RDS gateway. Do I:

    - set up the 3x 300GB drives in RAID5 and the 2x 146GB drives in RAID1
    - set up the 3x 300GB drives in RAID1e and the 2x 146GB drives in RAID1
    - set up the 2x 300GB drives in RA

linux - Server Free Memory

My Red Hat server shows the following:

    free -m
                 total       used       free     shared    buffers     cached
    Mem:          8113       8078         35          0        171       6491
    -/+ buffers/cache:       1415       6698
    Swap:         8189         59       8130

Is 35 MB of free memory considered critical on a production server?

ubuntu - Bind Slave Server Notify after adding new zone

I am new to DNS setup; I have recently set up master and slave DNS servers using bind9. Here is my config:

    Master DNS - ns1.example.com. - 192.0.2.1
    Slave DNS  - ns2.example.com. - 192.0.2.2

named.conf.options:

    options {
        directory "/var/cache/bind";
        recursion no;
        allow-transfer { none; };
        dnssec-validation auto;
        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { any; };
    };

named.conf.local:

    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com";
        allow-transfer { 192.0.2.2; };
    };

Slave named.conf.local:

    zone "example.com" {
        type slave;
        file "db.example.com";
        masters { 192.0.2.1; };
    };

This setup works perfectly, but now I want to add another domain. Should I only update named.conf.local on the master server, and will the slave DNS server be notified automatically? For example:

    zone "example2.com" {
        type master;
        file "/etc/bind/zones/d
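No: a slave only transfers zones it has been explicitly configured to slave, so each new zone needs a stanza on both servers. A hedged sketch for the slave side, mirroring the master stanza shown above:

    zone "example2.com" {
        type slave;
        file "db.example2.com";
        masters { 192.0.2.1; };
    };

After editing both sides, run rndc reload (or service bind9 reload) on each server. Notifies and zone transfers keep the zone contents in sync automatically, but the zone list itself is never propagated.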

memory - ASP.NET not seeing RAM Upgrade?

For several years we've hosted an ASP.NET 4.5 application on the same VM as a SQL Server 2008 R2 database, in 4GB of RAM. Performance was good. Our application is a catalog, and we use the .NET memory cache heavily to build up a 'working set' of parts and related data; 80,000-90,000 cache entries is typical. Over the past weekend we upgraded to 8GB of RAM, and we're seeing odd memory behavior from the ASP.NET application. After the upgrade, Task Manager tells us that we're only using 60% of the RAM. SQL is very responsive. But cache entries grow to 15,000 and then get trimmed back to the 7-8,000 range. There is lots of GC activity. It's as if the ASP.NET application is under memory pressure, and yet there are another 3+ GB of unused RAM out there. Why would this be? Everything is 64-bit. Nothing else has changed. There are no memory limits set on SQL or the application pool. The application is not recycling, just trimming cache very aggressively. Any ideas? Answer
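One thing worth checking (an assumption, as the answer is cut off) is ASP.NET's cache memory limits: the cache trims against private-bytes and physical-memory budgets whose defaults may not be what you expect after a RAM change, and they can be set explicitly in web.config rather than left to ASP.NET:

    <!-- values are hypothetical; privateBytesLimit is in bytes, 0 = let ASP.NET decide -->
    <system.web>
      <caching>
        <cache privateBytesLimit="6442450944" percentagePhysicalMemoryUsedLimit="90" />
      </caching>
    </system.web>

Comparing the application pool's private bytes against whatever limit is in effect should show whether the trimming is limit-driven.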

ssd - LSI 9207-8i and Samsung 850 PRO TRIM support

A Dell R610 server with an LSI 9207-8i HBA card has 6 Samsung 850 PRO SSDs connected to it. hdparm shows TRIM support:

    sudo hdparm -I /dev/sdc | grep -i trim
        *    Data Set Management TRIM supported (limit 8 blocks)

However, executing the Samsung Magician software on Ubuntu 14.04 returns the following error:

    ERROR : This feature is not supported for disks connected to LSI RAID HBA Cards.

Neither does the fstrim command help:

    fstrim: /: FITRIM ioctl failed: Operation not supported

The compatibility matrix doesn't list the Samsung 850 PRO, so should I get another controller that supports this SSD for TRIM to work? I do not need any hardware RAID capabilities and intend to configure these 6 drives as RAID 10 using mdadm.

Allow Permissions for Multiple FTP Users to Edit Public HTML Folder - Linux

On a dedicated server hosting one website, running Linux with no control panel, I'm trying to grant permission for multiple users to edit the public HTML folder (/var/www/html/). I want to make sure I do this the best way; I imagine if I set the permissions loosely via chmod, it will allow anyone who can reach the folder via FTP to change it. Is the solution to set up a wheel group, add the intended users to the wheel group, and then set the permissions for the wheel group? Right now, only one user can edit the public HTML. Answer Your solution is pretty close to what I'd recommend, but why do you specifically reference the wheel group? On many distributions the group wheel has full access to the sudo command, granting them full root access to the system. Let's assume you make a new group called webadmins:

    groupadd webadmins

Then you want to set the proper permissions to allow your webadmins to make changes:

    chown root:webadmins /var/www/html -R

The -R wil
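Two additions usually complete this recipe (hedged, since the answer is cut off): the setgid bit so new files keep the webadmins group, and actually putting each user in the group:

    find /var/www/html -type d -exec chmod 2775 {} \;   # 2 = setgid: new files inherit the group
    usermod -aG webadmins alice                         # 'alice' is a placeholder username

With setgid in place, files uploaded later inherit group webadmins instead of each uploader's primary group.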

Can you help me with my capacity planning?

This is a canonical question about capacity planning. Related: I have a question regarding capacity planning. Can the Server Fault community please help with the following:

    - What kind of server do I need to handle some number of users?
    - How many users can a server with some specifications handle?
    - Will some server configuration be fast enough for my use case?
    - I'm building a social networking site: what kind of hardware do I need?
    - How much bandwidth do I need for some project?
    - How much bandwidth will some number of users use in some application?

Answer The Server Fault community generally can't help you with capacity planning; the best answer we can offer is "Benchmark your code on hardware similar to what you'll be using in production, identify any bottlenecks, then determine how much of a workload your current hardware can handle, and/or how much hardware horsepower you need to handle your target workload". There are a number

linux - Max file descriptor limit not sticking

Our cloud-based systems use Java and require a max file descriptor limit greater than 1024. On one of our virtual systems, every time we try to make this change, I can get it to change and it will persist across the first reboot (have not tested multiple), but if we stop and start our Java app, the limit seems to get reset to 1024. System info:

    Linux mx.xen16.node01002 3.1.0-1.2-xen #1 SMP Thu Nov 3 14:45:45 UTC 2011 (187dde0) x86_64 GNU/Linux

Here are the steps I took: edited /etc/sysctl.conf and appended fs.file-max = 4096. Before applying, I checked the limit for the process (PID 1530):

    root      1530  6.7 31.0 1351472 165244 pts/0   Sl   17:12   0:58 java
    cat /proc/1530/limits
    Limit                     Soft Limit           Hard Limit           Units
    Max cpu time              unlimited            unlimited            seconds
    Max file size             unlimited            unlimited            bytes
    Max data size             unlimited            unlimited            bytes
    Max
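Note that fs.file-max is the kernel-wide ceiling, not the per-process limit; per-process limits come from ulimit and /etc/security/limits.conf, and limits.conf is only applied to PAM login sessions, not to daemons started from init scripts. A hedged sketch of both halves:

    # /etc/security/limits.conf (for interactive/login sessions):
    #   appuser  soft  nofile  4096
    #   appuser  hard  nofile  8192
    # In the app's init/start script, just before launching the JVM:
    ulimit -n 4096

That start-script ulimit is usually what stops the value snapping back to 1024 when the app is restarted outside a fresh login.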

domain name system - How to make my FreeBSD machine visible automatically on Mac within same local network?

The Mac machines within a local network are visible automatically to the other Macs. They're visible in Finder and can be accessed by their names via the console. As I understand it, this is multicast local DNS (mDNS). I want to make my FreeBSD machine visible from my Mac in the same way; I just want to connect to it by its hostname for SSH. Is there a simple solution for this? I tried the hosts file, but it was not a good idea because the host addresses are configured by DHCP and so are not guaranteed.
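Yes: the usual route is Avahi, which implements the same mDNS/Bonjour announcement the Macs use. A hedged sketch (package names as found in FreeBSD's ports collection; details may vary by release):

    pkg install avahi-app nss_mdns
    # /etc/rc.conf additions:
    #   dbus_enable="YES"
    #   avahi_daemon_enable="YES"
    service dbus start
    service avahi-daemon start

After that, the machine should be reachable from the Mac as hostname.local, e.g. ssh user@freebsdbox.local, independent of the DHCP-assigned address.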

linux - Removing Unnecessary Services & Packages in a MySQL Ubuntu 12.04 Server

As part of hardening a standalone/dedicated MySQL 5.6 server running on Ubuntu 12.04 LTS, unnecessary services and packages will have to be removed. For a server that is serving only as a MySQL server, what services and packages should we remove? Is there a list of services/packages that we can use? Here's the list of services running (?). Which ones look like they could be stopped and their packages removed?

    [ ? ]  acpid
    [ ? ]  anacron
    [ ? ]  atd
    [ - ]  bootlogd
    [ ? ]  console-setup
    [ ? ]  cron
    [ ? ]  cryptdisks
    [ ? ]  cryptdisks-early
    [ ? ]  cryptdisks-enable
    [ ? ]  cryptdisks-udev
    [ ? ]  dbus
    [ ? ]  dmesg
    [ - ]  grub-common
    [ ? ]  hostname
    [ ? ]  hwclock
    [ ? ]  hwclock-save
    [ - ]  keymap.sh
    [ ? ]  killprocs
    [ ? ]  module-init-tools
    [ ? ]  network-interface
    [ ? ]  network-interface-container
    [ ? ]  network-interface-security
    [ ? ]  networking
    [ ? ]  ondemand
    [ ? ]  passwd
    [ ? ]  plymouth
    [ ? ]  plymouth-log
    [ ? ]  plymouth-ready
    [ ? ]  plymouth-splash
    [ ? ]  plymouth-
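Rather than working from a fixed list, a hedged way to audit is to see what actually listens on the network and what each service does, then disable or purge per item:

    # What is listening right now?
    netstat -tlnp
    # Keep a service installed but off at boot (example: atd, if unused):
    update-rc.d atd disable
    # Or remove the package entirely (atd ships in the 'at' package):
    apt-get purge at

Candidates like atd, anacron and the plymouth splash services are commonly dropped on headless database servers, but verify each one before removing it.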

Offsite Backup Solution - RAID with LVM for GNU/Linux server

Background: Hello, I am setting up an Ubuntu GNU/Linux server which will combine: 1) software RAID1 (using mdadm), to provide data protection against hardware failure, and 2) the Logical Volume Manager (LVM), allowing flexibility in organising my data and the ability to easily add more capacity in the future. So far I have successfully: 1) set up RAID1 using mdadm and created /dev/md0; 2) set up LVM, making /dev/md0 a Physical Volume attached to a Volume Group called vg_data. I have a Logical Volume called lv_shared mounted on /home/shared:

    NAME                    FSTYPE            LABEL    UUID                                 MOUNTPOINT
    sda
    └─sda1                  ext4                       0xxxxxxx-2xxx-4xxx-8xxx-1xxxxxxxxxxx /
    sdb
    └─sdb1                  linux_raid_member ubuntu:0 02342342-2333-4444-8888-111111111111
      └─md0                 LVM2_member                57e241ad-aee3-4486-8eaa-222222222222
        └─vg_data-lv_shared ext4                       048b529c-2e39-4f49-83c9-333333333333 /home/shared
    sdc
    └─sdc1                  linux_raid_member ubuntu:0
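As a worked example of the 'easily add more capacity' goal, growing the share later takes one command once the volume group has free space (sizes and device names hypothetical; -r grows the ext4 filesystem in the same step):

    # If the capacity came from a new RAID1 pair, first add it to the VG:
    vgextend vg_data /dev/md1
    # Then grow the logical volume and its filesystem together:
    lvextend -r -L +500G /dev/vg_data/lv_shared

This is exactly the flexibility that justifies layering LVM on top of the mdadm mirror.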

virtualhost - How to assign multiple dedicated IP addresses and domains to the same directive in Apache?

I am setting up a Magento multi-site/multi-store structure, and the way it works is that it sets an environment variable depending on which domain is requested (to distinguish the store); basically all the domains point to the same directory. The trouble is I need each site domain to use a different IP address for other purposes. I will also need to install SSL for each store. Does anyone know if this is possible and how I would go about doing it? If it's any help, the server is Apache 2.2 and it's a WHM/cPanel setup. Thanks Answer What you want is called Virtual Hosts. They can be used to give each IP/port a separate site, including SSL certificate, WWWRoot, etc. --Christopher Karel
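A hedged sketch of how the vhosts could look, using Magento's conventional MAGE_RUN_CODE/MAGE_RUN_TYPE variables and placeholder names/IPs; each store gets its own IP-based vhost while sharing one DocumentRoot:

    <VirtualHost 192.0.2.10:80>
        ServerName store1.example.com
        DocumentRoot /home/user/public_html
        SetEnv MAGE_RUN_CODE store1
        SetEnv MAGE_RUN_TYPE store
    </VirtualHost>

    <VirtualHost 192.0.2.11:80>
        ServerName store2.example.com
        DocumentRoot /home/user/public_html
        SetEnv MAGE_RUN_CODE store2
        SetEnv MAGE_RUN_TYPE store
    </VirtualHost>

With one IP per store, each vhost can also carry its own SSL certificate on port 443, which matters on Apache 2.2 where SNI support cannot be assumed.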

linux - Hardening Apache server

I want to learn about hardening and securing an Apache server. Can anybody suggest a detailed web resource? I also want to learn the history of the different vulnerabilities that have existed in Apache, the possible attacks against them, and how to mitigate them. I need this for both Windows and Linux platforms. Anything else which you think I should know from a security perspective is welcome. (I am a student; I don't have industry experience. This question has been asked before, but I think the answers are aimed at working professionals.)
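While good resources vary, a few first-step directives recur in most Apache hardening guides; a sketch, not a complete policy:

    # httpd.conf / apache2.conf
    ServerTokens Prod            # don't advertise version and modules
    ServerSignature Off          # no version banner on error pages
    TraceEnable Off              # disable TRACE (cross-site tracing)
    <Directory />
        Options -Indexes         # no automatic directory listings
        AllowOverride None
    </Directory>

These apply equally on Windows and Linux, since they are Apache directives rather than OS settings.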

RAID-Ready Hard Drives vs Standard Hard Drives

What is the difference between normal data hard drives and RAID-ready hard drives? I am planning to build a server with 6x 2TB hard drives, and I have been quoted nearly $250 per 2TB RAID-ready drive. What is the advantage of going for these over, let's say, $70 2TB hard drives? I am planning a file server for a small office of around 20-30 people maximum, with no big files, mainly documents, plus an overnight backup of some VMware images. I am looking for the most cost-effective way to do this. Should I really invest in "RAID-ready" hard drives, or is it just a marketing act? My provider of the parts told me that if I go for standard hard drives "you will risk compromising data integrity". Is this true? Thanks in advance. Answer RAID drives typically are of a higher quality but also have different error-handling characteristics. There is a good article at http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery which describes the Western Digital
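The integrity claim is really about error-recovery timing (TLER/ERC): a desktop drive may retry a bad sector for tens of seconds, long enough for a RAID controller to mark the whole drive dead. On drives that expose it, the timeout can be inspected and set with smartctl; a hedged check:

    # Read the current SCT error recovery timers (if the drive supports SCT):
    smartctl -l scterc /dev/sda
    # Set read/write recovery limits to 7 seconds (values are tenths of a second):
    smartctl -l scterc,70,70 /dev/sda

Many cheap desktop drives simply don't support the command, which is a large part of what the RAID-edition premium buys.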

Bad disk performance on HP DL360 with Smart Array P400i RAID controller

I have an HP DL360 server with 4x 146GB SAS disks and a Smart Array P400i RAID controller with 256MB cache. The disks are in RAID 5 (3 disks + 1 hot spare). The server is running VMware ESX 3i. The disk write performance is really bad. Here are some numbers:

    ns1:~# hdparm -tT /dev/sda
    /dev/sda:
     Timing cached reads:   3364 MB in  2.00 seconds = 1685.69 MB/sec
     Timing buffered disk reads:   18 MB in  3.79 seconds = 4.75 MB/sec
    ns1:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
    125000+0 records in
    125000+0 records out
    1024000000 bytes (1.0 GB) copied, 282.307 s, 3.6 MB/s
    real    4m52.003s
    user    0m2.160s
    sys     3m10.796s

Compared to another server those numbers are terrible: a Dell R200, 2x 500GB SATA disks, PERC RAID controller (disks are mirrored).

    web4:~# hdparm -tT /dev/sda
    /dev/sda:
     Timing cached reads:   6584 MB in  2.00 seconds = 3297.79 MB/sec
     Timing buffered disk reads:  316 MB in  3.02 seconds = 104.79 MB/sec
    web4:~#
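Write numbers this low on RAID 5 (an assumption, since no answer is shown) usually point at the controller's write cache being disabled, typically because the P400i's battery kit is absent or failed, leaving every write to pay the full parity penalty. HP's CLI can confirm:

    # Show controller, cache and battery status in detail
    hpacucli ctrl all show config detail
    hpacucli ctrl slot=0 show status

If the output reports no battery or a disabled cache, adding the battery-backed write cache kit (or replacing the battery) is the usual fix.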

ssl - Setting up Wildcard subdomain (with reverse proxy) on apache 2.2.3

What I am trying to achieve is the following: I want numerous subdomains such as abc.domain.com to redirect to a URL such as www.domain.com/something?subdomain=abc. Since I am redirecting to a fully qualified domain, I needed to use a reverse proxy to avoid a change of URL in the browser (using the [P] flag and turning on the mod_proxy module and some other modules). This is my DNS setup:

    *.domain.com.    14400    A    111.111.11.1

This is my virtual host configuration for Apache:

    ServerName www.domain.com
    ServerAlias *.lionite.com
    DocumentRoot /var/www/html
    ErrorLog /var/www/logs
    UseCanonicalName off
    RewriteEngine on
    RewriteCond %{REQUEST_URI} !^/images
    RewriteCond %{HTTP_HOST} !^www\.domain\.com$
    RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
    RewriteRule ^([^.]+)\.domain\.com(.*) http://www.domain.com/something?subdomain=$1 [P,L]

This setup is working fine (let me know if you think you can improve it, of course). My main problem is when I am tr

virtualization - Increasing vCPUs for Server 2003?

I have a VM running Server 2003 and SQL Server 2008 on VMware ESXi. Can I safely assume this configuration can cope with a change to the number of vCPUs? Are there better solutions to address the demands of the guest? More info below... After an upgrade to software utilising SQL, I'm seeing a notable increase in CPU demand from the SQL server, often up to 100% for a few minutes at a time. Users are complaining of slow response from the software. The VM has only one vCPU assigned. My proposed solution is to increase the number of vCPUs assigned to the VM. Does anyone have any experience of increasing the number of vCPUs with this configuration? The only thing that scares me is the warning in vSphere that a change "after the guest OS is installed may make your virtual machine unstable". Reading responses to similar questions, it seems it should work without incident. Would it be best to also reserve some CPU for this VM, or make a change to the processing priority via the re