
Posts

Showing posts from February, 2015

domain name system - CNAME vs A records

I built a small rails app that allows users to make a simple site. It uses subdomain accounts, e.g. deb.myapp.com. Whenever a user wanted to have a domain name associated with their site, they would change their NS records to point to Slicehost where the application is hosted and I would manage the DNS records myself. However, as more people are using the application this is not an option for me anymore. I prefer users to keep their nameservers at GoDaddy, register.com, etc., so they can log in and manage their own MX records or whatever else they need to change. My question is, should I have them change the A records to point to my server's IP, or should I have them create a CNAME record? Do they need to delete the default A records to allow the CNAME record to work? Will the A record take precedence and overrule the CNAME record? Thanks in advance. Sorry if this is a very basic question. I've read other posts and I can't find a definite answer. Answer A CNAME Re
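A minimal sketch of what a customer's records could look like (the domain name and IP here are hypothetical): the bare domain has to stay an A record with your server's IP, while subdomains such as www can be a CNAME that follows your app's hostname.

customer-site.com.       IN A      203.0.113.10     ; bare domain: A record pointing at your server's IP
www.customer-site.com.   IN CNAME  deb.myapp.com.   ; subdomain: CNAME that tracks your app's hostname

A CNAME cannot coexist with other records on the same name, so any existing A record for www would have to be removed before the CNAME is added.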

virtualization - Consumer (or prosumer) SSDs vs. fast HDD in a server environment

What are the pros and cons of consumer SSDs vs. fast 10-15k spinning drives in a server environment? We cannot use enterprise SSDs in our case as they are prohibitively expensive. Here are some notes about our particular use case: Hypervisor with 5-10 VMs max. No individual VM will be crazy I/O intensive. Internal RAID 10, no SAN/NAS... I know that enterprise SSDs: are rated for longer lifespans and perform more consistently over long periods than consumer SSDs... but does that mean consumer SSDs are completely unsuitable for a server environment, or will they still perform better than fast spinning drives? Since we're protected via RAID/backup, I'm more concerned about performance over lifespan (as long as lifespan isn't expected to be crazy low). Answer Note: This answer is specific to the server components described in the OP's comment. Compatibility is going to dictate everything here. Dell PERC array controllers are LSI devices. So

virtualization - Routing single v6 /64 range to virtual machines

I am building a virtualisation host, and I want my virtual machines to be available via both v4 and v6 IP addresses. The host I have (Xen 4.1.3 with Debian Wheezy in dom0) has one physical eth0 interface: 10.0.0.2/30 dev eth0 default via 10.0.0.1 2000:1111:1111:1111::2/64 dev eth0 default via 2000:1111:1111:1111::1 (aka fe80::1) My ISP has assigned me a 10.100.0.0/28 IPv4 range, statically routed via 10.0.0.2 . On the host, I have built the xenbr0 virtual bridge interface: 10.100.0.1/28 dev xenbr0 On each guest VM, I set one of the unused addresses from 10.100.0.0/28 , e.g.: 10.100.0.2/28 dev eth0 default via 10.100.0.1 As expected (since the host acts like a classic router), VMs are able to talk to the v4 internet without a hitch. That's where my lack of experience with IPv6 kicks in. From my understanding v6 addresses are routed pretty much the same way as their v4 counterparts, which means that what I want to accomplish is impossible with only one /64 range (at least while eth0 & v6 g
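One workaround that is commonly used when only a single on-link /64 is available is NDP proxying on the host; a rough sketch, with a hypothetical guest address based on the question's (normalised) prefix:

sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
ip -6 route add 2000:1111:1111:1111::100/128 dev xenbr0      # route one guest address towards the bridge
ip -6 neigh add proxy 2000:1111:1111:1111::100 dev eth0      # answer NDP for that address on the upstream link

The guest then configures 2000:1111:1111:1111::100/64 with the host as its gateway. The ndppd daemon can automate the per-address proxy entries; a second, properly routed prefix from the ISP remains the cleaner solution.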

apache 2.2 - Multiple SSL domains on the same IP address and same port?

This is a Canonical Question about Hosting multiple SSL websites on the same IP. I was under the impression that each SSL Certificate required its own unique IP Address/Port combination. But the answer to a previous question I posted is at odds with this claim. Using information from that Question, I was able to get multiple SSL certificates to work on the same IP address and on port 443. I am very confused as to why this works given the assumption above and reinforced by others that each SSL domain website on the same server requires its own IP/Port. I am suspicious that I did something wrong. Can multiple SSL Certificates be used this way? Answer For the most up-to-date information on Apache and SNI, including additional HTTP-Specific RFCs, please refer to the Apache Wiki FYI: "Multiple (different) SSL certificates on one IP" is brought to you by the magic of TLS Upgrading. It works with newer Apache servers (2.2.x) and reasonably recent b
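For reference, a minimal sketch of two name-based SSL virtual hosts sharing one IP and port 443 (hostnames and paths are hypothetical; this assumes an SNI-capable build, i.e. Apache 2.2.12+ against OpenSSL 0.9.8j or later, and clients that send SNI):

NameVirtualHost *:443
<VirtualHost *:443>
    ServerName site1.example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/site1.crt
    SSLCertificateKeyFile /etc/ssl/private/site1.key
</VirtualHost>
<VirtualHost *:443>
    ServerName site2.example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/site2.crt
    SSLCertificateKeyFile /etc/ssl/private/site2.key
</VirtualHost>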

domain name system - Intermittent recursive/iterative DNS query failure

I have a problem issuing queries to a DNS and I'm not sure where to look for the underlying cause. I have a record "www.alumninews.uottawa.ca" which is a CNAME record which points to an A record for "uottawa.mailoutinteractive.com" which I host. When I query my ISP's DNS servers, I get different responses: The first does not recurse $ dig +recurse www.alumninews.uottawa.ca @64.59.184.13 ; <<>> DiG 9.8.1-P1 <<>> +recurse www.alumninews.uottawa.ca @64.59.184.13 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 13260 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;www.alumninews.uottawa.ca. IN A ;; ANSWER SECTION: www.alumninews.uottawa.ca. 3600 IN CNAME uottawa.mailoutinteractive.com. ;; Query time: 139 msec ;; SERVER: 64.59.184.13#53(64.59.184.13) ;; WHEN: Wed Apr 3 11:33:55 2013 ;; MSG SIZE rcvd: 87 Note that the CNAME does not get

windows - Why does accessing a folder via UNC path share not work but mapping the same path as a drive does?

I have two domains, PRIMARY and EXTERNAL. EXTERNAL has a one-way outgoing trust to PRIMARY so that PRIMARY's users can authenticate in EXTERNAL. Both domains have Windows Server 2008 DCs running at the Windows Server 2003 compatibility level. PRIMARY users are generally stripped of their authentication privileges in EXTERNAL (including PRIMARY domain admins) but the few users with explicit access get the authentication privileges granted. The EXTERNAL domain controller has a share called Projects on which everyone has full access. The folder is then locked down with ACLs to only allow a few of EXTERNAL's administrative groups. A few levels down in this folder hierarchy, there is a folder where a user (TESTUSER) in PRIMARY is given modify access. The UNC folder path to this folder is \\EXTERNAL-DC\Projects\A\B\C\Target . When PRIMARY\TESTUSER is logged into a PRIMARY domain-mapped computer with Windows 7, trying to go directly to the path does not work. ("[unc path][new lin

linux - Log every IP connecting on a system with iptables

Title says it all. How can I, with iptables under Linux, log every IP connecting to a server? As a little detail, I'd like to have only ONE entry in the log PER DAY PER IP. Thanks :) EDIT: I narrowed it down to 5 packets logged for every new session which is weird since I use --hashlimit 1 --hashlimit-burst 1 , I suspect that -m limit which defaults to 5 plays a role in there. Trouble is, if I set -m limit to 1, only 1 entry is logged for ALL IPs instead of one per EACH IP. The reason I want to do this is also to keep the logs from growing too fast since this will be a rather unmanaged box. EDIT2: Here is my current try, in an iptables-restore format: (on several lines for ease of reading) -A FORWARD -d 10.x.x.x -p tcp --dport 443 -m state --state NEW -m hashlimit --hashlimit-upto 1/min --hashlimit-burst 1 --hashlimit-mode srcip --hashlimit-name denied-client -j LOG --log-prefix "iptables (denied client): " Answer I would try this: # IP address entry o
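A sketch of one way to get closer to one entry per source IP per day, assuming the hashlimit build accepts a 1/day rate (the chain, port and prefix here are illustrative, not from the question's final answer):

iptables -A INPUT -p tcp --dport 443 -m state --state NEW \
  -m hashlimit --hashlimit-upto 1/day --hashlimit-burst 1 \
  --hashlimit-mode srcip --hashlimit-name daily-clients \
  --hashlimit-htable-expire 86400000 \
  -j LOG --log-prefix "new client: "

The --hashlimit-htable-expire value (milliseconds) matters: if entries expire sooner than the rate interval, the same source can be logged again on the same day.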

attach / detach mssql 2008 sql server manager

An external consultant wrote a guide on how to copy a database. Step two was to detach the database using SQL Server Manager. After the detach the database was not visible in the SQL Server Manager... Not much to do but write a mail to the service provider asking to have the database attached again. The service provider's answer: "Not possible to attach again since the SQL Server security has been violated". Rolling back to the last backup is not the option I want to use. Can anyone give feedback on whether it seems logical and reasonable to assume that a detached database in SQL Server 2008 accessed through SQL Server Manager cannot be reattached? It was done by right-clicking the database and choosing detach. -- update -- Based on the comments below I have updated the question with the server setup. There are two dedicated servers: srv1: Web server with remote desktop and a SQL Server Manager srv2: SQL server that can be accessed through the SQL Server Manager on the web server -- update2 -- After a r
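For what it's worth, a detached database can normally be reattached as long as the .mdf/.ldf files are still on disk and the SQL Server service account can read them; a minimal T-SQL sketch, with a hypothetical database name and file paths:

CREATE DATABASE MyDb
ON (FILENAME = 'D:\Data\MyDb.mdf'),
   (FILENAME = 'D:\Data\MyDb_log.ldf')
FOR ATTACH;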

domain name system - Windows 2016 DNS Server: not using forwarder when recursively resolving CNAME in delegated zone?

I don't think I'm going mad here... Our AD domain controllers (Server 2016) are the DNS servers for foo.example . Within that, we have a delegation, r53.foo.example , which points to the nameservers for that zone in Amazon Route 53. One of the records in the Route 53 zone is a CNAME to an EC2 instance's public DNS name, i.e. bar.r53.foo.example IN CNAME ec2-1-2-3-4.us-west-1.compute.amazonaws.com. The Windows DNS server is set to use Google public DNS servers as forwarders, and root hints are disabled. Recursion is enabled. From a client, if I query ec2-1-2-3-4.us-west-1.compute.amazonaws.com , it resolves correctly. Then, clear all the DNS caches. If I now query bar.r53.foo.example , the Windows DNS server will query the delegated zone's DNS server (because of the delegation), and get the CNAME result, but that upstream server doesn't recursively resolve the A record. Windows then sends an A record query to the delegated zone's nameserver - and not the NS for u

domain name system - Run antivirus software on linux DNS servers. Does it make sense?

During a recent audit we were requested to install antivirus software on our DNS servers that are running Linux (BIND9). The servers were not compromised during the penetration testing but this was one of the recommendations given. Usually Linux antivirus software is installed to scan traffic destined for users, so what is the point of installing antivirus on a DNS server? What is your opinion on the proposal? Do you actually run antivirus software on your Linux servers? If so, which antivirus software would you recommend or are you currently using? Answer One aspect of this is that recommending "anti-virus" to be on everything is a safe bet for the auditor. Security audits aren't entirely about actual technical safety. Often they are also about limiting liability in case of a lawsuit. Let's say your company was hacked and a class action lawsuit was filed against you. Your specific liability can be mitigated based on how well you followed industry st

domain name system - DNS setup for website on geo redundant servers

Let's say: I have a website written in English and served at www.example.com. The website is on a US server now (based on cPanel/WHM) at the IP address 192.0.2.0. I can manage the DNS of example.com using a control panel to add/modify any records: A, MX, etc. (Currently, all A records are obviously pointing to 192.0.2.0.) I would like the website to be served by the US server when a person in the US visits www.example.com, but by a server in the UK when a person in Europe visits. Is this possible using the same domain (with no subdomain redirections such as us.example.com and uk.example.com) by simply adding/modifying DNS records? If (1) is YES, how do I set up the DNS records of www.example.com in order to accomplish this? If (1) is NO, are there other solutions available to accomplish this, and what are these solutions? Answer You can use Anycast DNS to achieve what you want. That way people living in the USA will get the reply from a USA

kvm virtualization - CentOS7: KVM: error: Cannot create user runtime directory '/run/user/0/libvirt': Permission denied

I've been trying to resolve an issue I found when having our Nagios installation use a KVM plugin, check_kvm. I think my problem boils down to a permissions issue with the nagios/nrpe user. After installing nrpe and plugins, I do not have any issues with other standard plugins like check_disk or check_load, etc. Basically, the KVM plugin uses virsh to check status, so I enabled login for nrpe (also tried the nagios user, but it appears the service is running under the nrpe user) and tried the following: [root@vhost3 ~]# su nrpe sh-4.2$ virsh list --all error: failed to connect to the hypervisor error: no valid connection error: Cannot create user runtime directory '/run/user/0/libvirt': Permission denied But there is no problem with this command as root, of course, and the plugin executes fine when tried locally: [root@vhost3 ~]# virsh list --all Id Name State ---------------------------------------------------- 2 www ru
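The error suggests the su'd shell inherited root's XDG_RUNTIME_DIR (/run/user/0) and that virsh fell back to a per-user qemu:///session connection. A sketch of things to try (the group name varies by distro and libvirt version, so treat it as an assumption):

su - nrpe -s /bin/bash                 # full login shell, so the user's own environment is used
virsh -c qemu:///system list --all     # talk to the system libvirt instance explicitly
usermod -a -G libvirt nrpe             # only if a libvirt group grants socket access on this distro

Forcing the URI the same way in the Nagios command definition (virsh -c qemu:///system ...) avoids depending on the environment at all.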

capacity - Apache: "Server seems busy", but lots of idle processes

I should note that I'm not a sysadmin. You'll figure that out very shortly. :) In a nutshell: Apache keeps taking a breather during heavy loads and all processes go idle. This is a polling server that is used by applications. The polls come from a lot of different endpoints. From time to time (every 4-5 minutes) if I'm watching top, HTTPD processes go idle all at the same time, stalling traffic for 10 seconds or so. It then recovers. The delay is problematic. Server is serving a lot of traffic. These are application polls via HTTPS, not web pages (though I doubt Apache knows the difference) The pauses noted above cause the traffic to become lopsided: after some time, I get a WHOLE BUNCH OF TRAFFIC, then a lull, then a WHOLE BUNCH OF TRAFFIC again Each poll requires a small database dip Apache logs Sometimes , but not always (mostly after a restart), I get these messages in error_log. Most of the time when it happens, I see nothing in the error_log. [Mon Jun 30 1

What am I looking for in a Monitoring Solution?

This is a Canonical Question about Monitoring Software. Also Related: What tool do you use to monitor your servers? I need to monitor my servers; what do I need to consider when deciding on a monitoring solution? Answer There are a lot of monitoring solutions out there. Everyone has their preference and each business has its own needs, so there is no correct answer. However, I can help you figure out what you might want to look for in choosing a monitoring solution. In general monitoring systems serve two primary purposes. The first is to collect and store data over time. For example, you might want to collect CPU utilization and graph it over time. The second purpose is to alert when things are either not responding or are not within certain thresholds. For example, you might want alerts if a certain server can't be reached by pings or if CPU utilization is above a certain percentage. There are also log monitoring systems such as Splunk but I am treati

HP Proliant DL380 G7 compatible with Kingston SKC1000H/960G PCIe SSD NVME?

Hi to all the experts out there. Can anyone confirm if they've successfully installed and operated the Kingston SKC1000H/960G PCIe SSD NVME card in an HP Proliant DL380 G7 Server? If so, is it just plug-and-play? Does the extra capacity just become available as a new volume? Despite being out of drive bays, we are looking for a performance upgrade on our SQL service and have had excellent experience with SSD drives in the past, but we have never used a PCIe solution before. (Suggestions for alternatives to the Kingston device would be welcome too if anyone has such experience) Thanks in advance.

domain name system - How to get subdomain in Route 53 to resolve to Internet-facing Elastic Load Balancer?

I own a domain, call it doggos.lol that uses Route 53 for DNS. I want to create a subdomain elb.doggos.lol that resolves to the public DNS of an ELB. I created a CNAME to route elb.doggos.lol to an Alias target (the ELB public DNS). I saved the record but the route is not working. If I execute an HTTP request against the public DNS of the ELB, I get the correct REST response from the server it sends to. However, if I go to the subdomain in the CNAME record, I get DNS_PROBE_FINISHED_NXDOMAIN. Testing the CNAME record on Route 53 returns a REFUSED DNS response code. Am I missing something? Answer Turns out for Alias targets, you must use an A record (or AAAA for IPv6). I switched the record from CNAME to A and this resolved the problem. https://aws.amazon.com/premiumsupport/knowledge-center/route-53-create-alias-records/
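A sketch of the same fix done with the AWS CLI, in case someone needs to script it (zone IDs and the ELB name are placeholders; note the AliasTarget HostedZoneId is the ELB's own canonical hosted zone ID, not your domain's):

aws route53 change-resource-record-sets --hosted-zone-id Z_DOGGOS_ZONE --change-batch '{
  "Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
    "Name": "elb.doggos.lol", "Type": "A",
    "AliasTarget": {"HostedZoneId": "Z_ELB_CANONICAL_ZONE",
                    "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": false}}}]
}'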

hardware - HP plan to restrict access to ProLiant server firmware - consequences?

I've been a longtime advocate for HP ProLiant servers in my system environments. The platform has been the basis of my infrastructure designs across several industries for the past 12 years. The main selling points of ProLiant hardware have been long-lasting product lines with predictable component options, easy-to-navigate product specifications (Quickspecs), robust support channels and an aggressive firmware release/update schedule for the duration of a product's lifecycle. This benefits the use of HP gear in primary and secondary markets. Used and late-model equipment can be given a new life with additional parts or through swapping/upgrading as component costs decline. One of the unique attributes of HP firmware is the tendency to introduce new functionality along with bugfixes in firmware releases. I've seen Smart Array RAID controllers gain new capabilities, server platforms acquire support for newer operating systems, serious performance issues resolved; all through

Linux NFS - set default user for new files on nfs share

I use CentOS as the NFS server and 2 CentOS machines as clients. I have some problems with permissions/ownership for new files/directories created from clients on the NFS share. My exports file: /media/nfsshare *(rw,sync,no_root_squash) And my idmap.conf: [Mapping] Nobody-User = nobody Nobody-Group = nobody Finally, fstab on clients: 172.18.2.132:/media/nfsshare /shared-disk nfs rw,addr= 0 0 I set /shared-disk permissions to 777 and all clients can create/delete files on the mounted share. But: I don't want 777 permissions. I need 660 instead. Every file created by clients has owner: '-2 - user #-2' and group '-2'. I want ownership to go to the user who created the file - the system users on each client have the same IDs, groups and group IDs. Any tips?

Many domains/sites hosted on same server, CNAME alternatives to avoid writing same IP in DNS?

I have many sites (each one with its own domain) all on the same cPanel hosted server (let's say the server IP is 1.1.1.1 and the server main domain is myserver.com ). All these domains use third party DNS (not the cPanel hosted ones), and I set up the DNS of each one of these domains to point to the server IP. Example of how each domain's DNS is currently set: domainx.com -> A -> 1.1.1.1 domainx.com -> MX -> mail.domainx.com mail.domainx.com -> A -> 1.1.1.1 www.domainx.com -> CNAME -> domainx.com ftp.domainx.com -> CNAME -> domainx.com This situation obliges me to repeat the server IP 1.1.1.1 hundreds of times, once for each domain. In the event that the server IP changes I will have to go through each domain's DNS to update the records with the new IP. So I thought, why not use CNAME to avoid rewriting the server IP everywhere?! I could set each domain's DNS like the following: domainx.com -> CNAME -> myserver.com domainx.com -> MX -> mail.myserver.com mail.domainx.com ->
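A per-domain sketch along those lines, assuming myserver.com keeps the single A record that is maintained in one place. Note that the bare domain itself cannot be a CNAME (it already carries SOA, NS and MX records), and MX targets should not point at CNAMEs either, so one A record per domain remains unavoidable:

domainx.com.        IN A      1.1.1.1          ; apex cannot be a CNAME
www.domainx.com.    IN CNAME  myserver.com.
ftp.domainx.com.    IN CNAME  myserver.com.
mail.domainx.com.   IN A      1.1.1.1          ; keep MX targets as A records
domainx.com.        IN MX 10  mail.domainx.com.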

Can ping, can establish SSH connection one way but not the other way

First of all, sorry for my English. We're facing a very strange problem with an SSH connection between two specific servers. Let's say we have servers X1, X2 and Y, where X1 and X2 are behind the same firewall, have the same operating system installed, and use the same configuration for everything that's possibly related to the situation. We don't have any rule set to allow or block only certain IPs or whatever in iptables on server Y, but anyway... The X1 and X2 servers communicate with the outside world using the same IP address. PROBLEM: Server X1 cannot connect to server Y via SSH. It gets a response on ping, but nothing else, no other service on any other port manages to connect. X2 or any other server can connect to X1, and X1 can connect to any other server except Y1. [root@X1]# ssh -v root@Y1 OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to Y1 [Y1] port 22.

Best way to install multiple versions of Apache, PHP and MySQL on a single FreeBSD host

I want a test and development environment for web using Apache, PHP and MySQL. I need to be able to test a single web application with multiple versions of PHP (5.2, 5.3, etc.) and multiple versions of MySQL (5.0, 5.1, 5.5, etc.). It shall be hosted on a FreeBSD server. My idea is to compile each version into a directory structure and run them on separate port numbers. For example: opt/apache2.2-php5.2-mysql-5.0 (httpd on port 8801, mysql on port 8802) (directory contains each software, compiled and linked against each other) opt/apache2.2-php5.3-mysql-5.1 (httpd on port 8803, mysql on port 8804) (and so on) Any thoughts or suggestions on the best way to set up this type of environment? UPDATE (background information): The environment would be for education. I have x00 students who develop web applications and they have a directory where they store all their code (HTML, CSS, PHP, SQL etc). I would like to give them an easy way to test their applications on various versions of PHP and M

windows 7 - Automatically Configure New Computers

My company is in the process of upgrading all of our users from old Windows XP computers to newer quad-core Win7 computers. This is a good thing - it's long overdue that we upgrade our workstations - but I now spend a ton of time configuring new computers. Is there any way to automate this process? The steps that I go through with just about every computer: Run through the Win7 setup process (we do mostly HPs, so we get the stupid "The computer is personal again" thing). Uninstall bloatware (Norton, Bing Bar, Roxio, etc.) Install updates Add to domain & configure network settings Install Office and other company-specific applications Configure important shortcuts (Outlook on the task bar) There are a couple of other things that I do after that that would be nice to automate, but it's unlikely due to license keys, passwords, etc.: Configure Outlook Pull in files/settings with the Easy Transfer wizard Map network drives I know that it's possible to create a complete image

performance - How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves--things like networking, storage and monitoring. As part of the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and . It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I , Part II and Part III . We also set up a Cacti monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault . It's been a fu

email - Using a CNAME to forward traffic from a naked domain

I have a domain, flyh2.com. I use a CNAME to forward www.flyh2.com to flyh2.elasticbeanstalk.com where my web site is hosted. This is the only way Amazon allows custom domain names. A records aren't allowed. I'd like people to simply type http://flyh2.com (without the www) and still have visitors see my web site. Originally I used CNAME to forward both the naked and the www records to my web site, but it seemed to cause problems. Incoming mail was being returned to sender: Fwd: Returned mail: see transcript for details ... Deferred: Connection timed out with flyh2.elasticbeanstalk.com. Message could not be delivered for 6 hours Message will be deleted from queue Seems that the CNAME on the naked domain was overriding the MX records. Now I've changed the CNAME on the flyh2.com record to point to www.flyh2.com and in turn www.flyh2.com to CNAME to flyh2.elasticbeanstalk.com. My MX records are set up correctly, but the CNAME on the naked domain seems to override them.

How do I replace the root filesystem (ext4) with a new one in LVM with one which has more inodes?

I have a few systems which have been running over a decade in a cluster on SLES 10 (now long past EOL). We're migrating to CentOS 6 64-bit. I got everything done but the final data syncs, and lo and behold, surprise, I ran out of disk space...except it's in the inode table, not the raw capacity. ReiserFS (in use on the SLES boxes) did not enforce a limit - indeed, I don't even know how many inodes there are in use because it not only doesn't enforce, it doesn't even track/report them. I can get that number with a one-liner, no problem. My issue largely revolves around LVM, probably. That's my weak spot. I'm just really -fairly- new to using it, having mostly used raw devices since 1993. What I have is a new machine with a logical volume group, containing a swap partition and the root filesystem as two volumes. It's a whopping 100GB, but it needs to have well over 6.5mil inodes...I ran out around 6.4mil. I understand fully that I need to get a tot
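A rough sketch of the usual approach with LVM, assuming there is free space in the volume group; the names (vg0, root_new) and the inode count are placeholders, and a backup should exist before any of this:

lvcreate -L 100G -n root_new vg0
mkfs.ext4 -N 20000000 /dev/vg0/root_new      # -N sets the inode count explicitly (or tune -i bytes-per-inode)
mount /dev/vg0/root_new /mnt/newroot
rsync -aAXH --numeric-ids --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/mnt/*"} / /mnt/newroot/
# then point /etc/fstab and the bootloader at the new LV, reboot into it, and lvremove the old root volume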

amazon ec2 - disk space keeps filling up on EC2 instance with no apparent files/directories

How come the OS shows 6.5G used but I see only 3.6G in files/directories? Running as root on an Amazon Linux AMI (seems like CentOS), lots of free memory available, no swapping going on, no apparent file descriptors issue. The only thing I can think of is a log file that was deleted while applications were appending to it. Disk space usage is slowly but continuously rising towards full capacity (~1k/min with very small decreases from time to time). Any explanation? Solution? du --max-depth=1 -h / 1.2G /usr 4.0K /cgroup 22M /lib64 11M /sbin 19M /etc 52K /dev 2.1G /var 4.0K /media 0 /sys 4.0K /selinux du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory du: cannot access `/proc/14024/fd/4': No such file or directory du: cannot access `/proc/14024/fdinfo/4': No such file or directory 0 /proc 18M /home 4.0K /logs 8.1M /bin 16K
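A quick way to confirm the deleted-but-still-open log file theory (lsof is in the standard repos if it isn't already installed):

lsof +L1               # open files with a link count of 0, i.e. deleted but still held open by a process
lsof | grep deleted    # rougher equivalent; the SIZE column shows how much space each one pins down
# restarting the owning process - or truncating the file via its /proc/<pid>/fd/<n> entry - frees the space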

Can I compromise my VPS security by hosting wordpress site owned by another person?

I have a personal VPS running Ubuntu Server 12.04 with a standard AMP stack. I want to give my client hosting, but I'm worried that if I give the client WordPress hosting with wp-admin access, that means he will be able to execute PHP code on my server. Being a VPS that runs on a single username, with Apache as www-data, could this lead to a serious security breach? I can chmod to www-data only the files that reside within the uploads dir, thus disabling extra plugins and access to theme file edits. But will that be enough? Answer I think the thing to keep in mind is that WordPress, like any other popular software, is often a target for attacks. The upshot is that you could have your permissions configured in the best possible way, but if an extension is installed, and 6 months later a vulnerability is found in it, then it won't be long before your server is compromised. Unless you or your customer is prepared to diligently apply security updates, and thoroughly check all security aspects of your serv

windows server 2008 r2 - What are the implications of enabling the Recycle Bin feature in Active Directory?

An admin accidentally deleted the wrong OU and it removed several account and computer objects. The Recycle Bin optional feature was not enabled. We used AdRestore from Sysinternals to get the accounts back. To make this process easier next time, we want to enable the Recycle Bin optional feature, which is easily done as per guides and TechNet using Enable-ADOptionalFeature via PowerShell. In both PowerShell and the above link the following is mentioned: In this release of Windows Server 2008 R2, the process of enabling Active Directory Recycle Bin is irreversible. After you enable Active Directory Recycle Bin in your environment, it cannot be disabled. In theory I would always want to leave it enabled, but I have hesitated until I understand the implications of what is about to happen. I have a single-domain forest if it matters. What is the implication of enabling this feature? This must relate to why it is not enabled by default. Answer The main implication
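For reference, the PowerShell mentioned above looks roughly like this (the forest name is a placeholder; it assumes the forest functional level is already Windows Server 2008 R2 and the account has Enterprise Admin rights):

Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'corp.example.com'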

remote - Debian - move partitions to a new drive

I have a XenServer 6 VM with Debian Squeeze 64-bit and only 1 partition, /dev/xvda1, 95 GB plus Linux partitions: Filesystem Size Used Avail Use% Mounted on /dev/xvda1 95G 63G 28G 70% / tmpfs 2.0G 0 2.0G 0% /lib/init/rw udev 2.0G 68K 2.0G 1% /dev tmpfs 2.0G 0 2.0G 0% /dev/shm I used XenCenter to resize the available space for this VM to 300 GB. This worked. But now I need to tell the ext3 filesystem to add some space. I only found some instructions with LiveCDs etc., but my server is rented and it's in a remote data center, and I do not really want to experiment with a remote LiveCD. It is a running webserver, so I must not lose any data or partitions. I can do it at night, so there are no big troubles with reachability. The server should have a RAID (2x 1TB HDD). I have another 2 VMs on it. The question is: how can I do it without risking too much and without LiveCDs? Is there any
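A sketch of the usual no-LiveCD route once XenCenter has grown the virtual disk (double-check a backup first; the key point is that the partition must be recreated with the same starting sector):

fdisk /dev/xvda        # delete xvda1 and recreate it at the same start sector, just larger; write and quit
reboot                 # so the kernel re-reads the partition table of the in-use root disk
resize2fs /dev/xvda1   # ext3 supports online growing, so this can run on the mounted filesystem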

networking - Windows server, temporary alias name, only visible from one machine

Two weeks ago our IT department migrated a huge network folder from an older server to a newer one. For example, the older folder was reachable under \\oldserver\ourfolder and the new one under \\newserver\ourfolder . Note the old server is still in use and visible in our company's network; \\oldserver\ourfolder was archived in between and deleted from the server. Now we found out that there are several files (actually CAD files in some proprietary binary format) having hardcoded references pointing to the old folder location. Since there are several of those files, we would like to write a program to change those references to the new location. Unfortunately, the CAD system's VBA interpreter will only allow this when the referenced files are visible under the old network path. Otherwise, it will stop execution with an error message. So what we need here is a way to make "oldserver" an alias name for "newserver". This should be done only temporarily, and only for
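Two ingredients are often combined for this kind of temporary alias, sketched here with placeholder values: a name override visible only on the one machine, plus relaxed strict name checking on the new server so SMB answers to the extra name:

# on the single client, in C:\Windows\System32\drivers\etc\hosts (use newserver's real IP):
192.0.2.25   oldserver
# on newserver, then restart the Server service:
reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v DisableStrictNameChecking /t REG_DWORD /d 1

If Kerberos is in play an SPN for the old name may also be needed; the hosts-file entry at least keeps the change scoped to that one machine.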

virtualization - Slow http response time from vm

I am currently hosting a test website on an 8 vCPU, 2 GB RAM virtual machine using nginx and php-fpm; the host machine is hosting other virtual machines doing the same thing, with a 10 Gbps network interface. During my stress test, with 50 users requesting concurrently, my test website's response time rises from the normal 800-900 ms to around 2 seconds. After this test I tried to increase the virtual machine's RAM to 6 GB but there was no change in the response time at all. What could be causing this?

linux - proftpd on debian: super user and simple users with their folder

Hi there, sorry for my English, I'm Italian. Here is what I would like to do with proftpd. I have a main user: webserver, group: www-data, which must be able to do anything in /var/www/ and its subfolders (it actually works). Then in /var/www I have two folders: www.one.com and www.two.com. www.one.com is the DefaultRoot for user one (group one)... that's OK. www.two.com is the DefaultRoot for user two (group two)... that's OK. DefaultRoot /var/www/www.one.com one DefaultRoot /var/www/www.two.com two Now the problem: everything works except that user one can't write in www.one.com and user two can't write in www.two.com. This seems to be normal because all folders, subfolders and files in /var/www/ are owned by user webserver and group www-data, but how can I resolve it? How can I give privileges to user one and user two in their own folders? Of course I don't want to set 777 on all files! root@debian:/var/www# ls -lh drwxr-sr-x 3 webserver www-data 4.0K Jul 24 18:07 one.dyndns.org drwxr-sr-x 2 webserver www-data 4.0K Jul 25 04:41 two.homepc.it -rwxr-xr-x 1 webs
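A sketch of one way to do it without 777, using the folder names from the question's description (substitute the actual directory names shown by ls): give each site directory the matching user's group, plus group-write and the setgid bit so new files keep that group.

chown -R webserver:one /var/www/www.one.com
chown -R webserver:two /var/www/www.two.com
chmod -R 2775 /var/www/www.one.com
chmod -R 2775 /var/www/www.two.com

The webserver user keeps owner access, while users one and two get write access through their groups; a Umask 002 in proftpd.conf keeps newly uploaded files group-writable.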

domain name system - CNAME okay for primary DNS record?

I'm setting up a hosting service, and as part of that service I need to automatically create DNS records in our nameservers for all the domains hosted. Currently I'm using the following template: ; ; [USER] - [DOMAIN] ; $TTL 604800 @ IN SOA [PRIMARY-NS]. [NS-ADMIN]. ( [SERIAL] ; Serial 10800 ; Refresh 3600 ; Retry 1209600 ; Expire 43200 ; Negative Cache TTL ) ; @ IN NS [NS1]. ; Nameserver @ IN NS [NS2]. ; Nameserver @ IN A [SERVER-IP] ; Primary IP * IN A [CATCH-ALL-IP] ; Catch-all IP @ IN MX 0 mail What I'm interested in knowing is, if I replace the primary A record statement with a CNAME to that server's DNS entry - will there be any adverse effects? This would make IP management on my servers far easier as I would only need to update one DNS record. @ IN CNAME [SERVER-DOMA

linux - TCP listen on any IPv6 in a block on Debian

I have a /64 block of IPv6 addresses, and I'd like to be able to start a TCP server listening on any one of them. Currently I can bind to any static IP address, but not any others. If I try to bind to an address not statically routed (by the way, I'm not sure if I'm using the right terms), I get an error message, "bind: cannot assign requested address". Here's the ifconfig output: eth0 Link encap:Ethernet HWaddr 56:00:00:60:af:c6 inet addr:104.238.191.172 Bcast:104.238.191.255 Mask:255.255.254.0 inet6 addr: fe80::5400:ff:fe60:afc6/64 Scope:Link inet6 addr: 2001:19f0:6801:187:5400:ff:fe60:afc6/64 Scope:Global inet6 addr: 2001:19f0:6801:187:ea1e:eb99:13ae:d49a/128 Scope:Global UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:1526389 errors:0 dropped:0 overruns:0 frame:0 TX packets:1622562 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000
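On Linux this is what the "AnyIP" trick is for: declare the whole block as locally deliverable, so bind() accepts any address in it. A sketch using the prefix visible in the ifconfig output, assuming that /64 really is routed to this host rather than only on-link:

ip -6 route add local 2001:19f0:6801:187::/64 dev lo
# after this, a TCP server can bind to any address inside the /64 without adding each one to an interface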