Posts

Showing posts from November, 2018

windows server 2003 - Left over domain controllers

In our Windows 2003 network we have two active domain controllers. I say active because listed in the Active Directory Sites and Services (Sites, Default-First-Site-Name, Servers) there are 4 servers listed. One of these, let's call it Server-X, has no objects associated with it and it has long been powered down, two are legit domain controllers, and the final one, let's call it Server-Y, appears as a legit DC but I am having trouble removing it. So, Server-X must go. I was already under the assumption that it was removed... so would it be safe to delete it from the AD:Sites and Services DFSN Servers list? Server-Y must also go but I'm having trouble removing it using the dcpromo wizard. This server is actually causing issues because workstations within the domain, every now and again, try to authenticate against it and get rejected. Should I just use dcpromo /forceremoval? Thank you Answer I would use /forceremoval only as a last resort, it is probably stoppi
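
Not part of the original excerpt, but a rough sketch of the metadata cleanup that usually accompanies a forced removal: if Server-Y is demoted with dcpromo /forceremoval (or Server-X is simply gone), its leftover objects are typically removed with ntdsutil on a surviving 2003 DC. Server names and object numbers below are placeholders:

    ntdsutil
    metadata cleanup
    connections
    connect to server SURVIVING-DC
    quit
    select operation target
    list domains
    select domain 0
    list sites
    select site 0
    list servers in site
    select server <number of the stale DC>
    quit
    remove selected server
    quit
    quit

After that, the stale server object under Sites and Services (and any leftover DNS SRV records pointing at it) can usually be deleted.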

supermicro - Possible to use SAS-2 and SAS-3 HBA in the same system?

I'm wondering if it is possible to use a SAS-2 HBA in a "SAS-3" system. I have a Supermicro X11DPH-T and have attempted to install an LSI 9201-16e. From what I understood (which may be wrong), a PCIe 2.0 card may be installed in a PCIe 3.0 slot, but the slot will negotiate down to 5GT/s. Unfortunately, even setting the PCIe slot to 5GT/s manually did not allow the card to work in the system. It initializes other PCI devices, but when it gets to the 9201-16e, the screen displays a blinking cursor in the upper left hand corner, then about 30 seconds later, the system reboots. This will continue indefinitely until shutting down and removing the card. Is it at all possible to use a SAS-2 card in this system or is the issue strictly PCI compatibility?
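
Not from the post, but as a hedged diagnostic: on a Linux host where the card does initialize (for example with its option ROM disabled in the BIOS), one could check what PCIe link the slot actually negotiated for the HBA. The bus ID 03:00.0 below is only an example:

    lspci | grep -i LSI                                # find the HBA's bus ID, e.g. 03:00.0
    lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'     # compare advertised vs. negotiated link speed/width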

linux - Maximum number of files in one ext3 directory while still getting acceptable performance?

I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext . I'm changing to such a sub-directory structure and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate. Answer Provided you have a distro that supports the dir_index capability then you can easily have 200,000 files in a single directory. I'd ke
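
A minimal sketch of checking for and enabling dir_index on an existing ext3 filesystem; the device name is a placeholder, and e2fsck should only be run on an unmounted filesystem:

    tune2fs -l /dev/sdX1 | grep dir_index   # see whether hashed b-tree directory indexing is enabled
    tune2fs -O dir_index /dev/sdX1          # enable it if it is not
    e2fsck -fD /dev/sdX1                    # rebuild/optimize the indexes of existing directories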

domain name system - Do CNAME records overrule A records?

I have several A records like so:

Subdomain               IP Address
example.example.com     198.51.100.0
example.com             203.0.113.0

And a CNAME record that looks like this:

Alias               Destination
www.example.com     example.com

I want to make example.com throw a 301 redirect to www.example.com. So I'd change the last A record to:

www.example.com     203.0.113.0

And swap the two URLs in the CNAME record to look like the following:

example.com     www.example.com

Question: Is this change going to make example.example.com resolve to 203.0.113.0 instead of to 198.51.100.0? Answer If you have an A record for example.foo.com then no DNS record for any other domain will affect that. So the answer is no. Other facts to bear in mind: You can't have a CNAME and an A record for the same fully qualified domain. A CNAME is not the same as a 301 redirect. A CNAME will return the same IP address as the new domain. Your browser will go to that IP asking for the ori
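
As an illustration of the rule in the answer (assumed BIND-style zone syntax, reusing the addresses from the question), a name can carry an A record or a CNAME, but never both:

    example.com.            IN A      203.0.113.0
    example.example.com.    IN A      198.51.100.0
    www.example.com.        IN CNAME  example.com.
    ; www.example.com.      IN A      203.0.113.0   <-- not allowed alongside the CNAME above

The 301 redirect itself would have to come from the web server answering for example.com, not from DNS.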

domain name system - DNS: CNAME as www record

I'm considering changing the DNS www records of a domain name from an A record to a CNAME. Various questions and answers on serverfault are not as clear cut as I'd hoped. Also, many DNS checking tools like DNSsy or intoDNS check that the www record is an A record pointing to a public IP address, not a CNAME. In my case, I want to point my domain's www record to an Amazon Web Services load balancer, which I can only do with a CNAME. What would be the best way to achieve that? Answer If you are concerned about this issue, you can use Route 53 and ELB together to get what you want. Create the www record as an A record, then select the "Alias" option and the interface will allow you to select an AWS-specific target to point the record to. So the ELB has to exist first, then you can create the Alias to it. So you start to create a new A record in Route 53 as usual, but you click on the "Alias" option right under the host n
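
The answer describes the Route 53 console workflow; as a hedged sketch, the same alias record could also be created with the AWS CLI. The hosted zone IDs and ELB DNS name below are placeholders:

    aws route53 change-resource-record-sets \
      --hosted-zone-id ZONEID-OF-EXAMPLE-COM \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "AliasTarget": {
              "HostedZoneId": "ZONEID-OF-THE-ELB",
              "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
              "EvaluateTargetHealth": false
            }
          }
        }]
      }'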

iis - how to install 2 simple ssl certificates on 2 web sites having the same ip, in iis8?

How to add a second SSL certificate to the same server for two different web sites? How to set up a second IP for the second web site, considering that the server is load balanced? More information: I've bought an SSL certificate for subdomain1.domain1.com from a CA and a second SSL certificate for subdomain2.domain1.com from another CA. I've installed both certificates, and I've configured certificate 1 for subdomain1.domain1.com. When I try to add certificate 2 for the binding of subdomain2.domain1.com I get this error message: "At least one other site is using the same HTTPS binding and the binding is configured with a different certificate. Are you sure that you want to reuse this HTTPS binding and reassign the other site or sites to use the new certificate?" I know it is because SSL on IIS is IP based, so how can I add a second IP to the server in order to make the second SSL certificate work? The server is load balanced.
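
Two hedged options that go beyond the excerpt: on IIS 8 the second binding can use SNI instead of a second IP, or a second IP can be added to the NIC and the second site's HTTPS binding tied to it. Host names, the certificate thumbprint, the appid GUID and the interface name below are placeholders:

    rem Option 1: SNI binding for the second certificate (IIS 8 / Server 2012)
    netsh http add sslcert hostnameport=subdomain2.domain1.com:443 certhash=THUMBPRINT appid={00000000-0000-0000-0000-000000000000} certstorename=MY

    rem Option 2: add a second IP address, then bind the second site to it
    netsh interface ipv4 add address "Local Area Connection" 192.0.2.20 255.255.255.0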

solaris - Strange ZFS hidden filesystem problem

Half of my ZFS filesystems are hidden in ZFS-fuse. Here's my story: So, I love ZFS. I used it for about six months on FreeBSD, but due to it crashing the kernel during heavy inter-filesystem IO load, I tried switching to Solaris 5.10. That was good, but when I attempted to do an import of my Version 13 pool into its Version 4 version of ZFS, there were some hefty problems. It may have tried to correct the filesystem definitions, I don't know. Since that version wasn't compatible with my pool, I've now switched to Ubuntu Server 10.4. That version more than supports that of my pool, but I can only see half of my filesystems. The filesystems I can see are the same as those Solaris could see. Now, despite those filesystems not being present in a 'zfs list' command, I can still set properties on them and I can even still mount them and read and write files, but they just plain don't show up in 'zfs list'. I've mounted the major ones, but I'm not s
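
Not from the post, but a few zfs-fuse diagnostics that might narrow down where the missing datasets went; the pool name tank is a placeholder:

    zpool status tank
    zpool get version tank
    zfs list -r -t all tank                  # every dataset, including snapshots and volumes
    zfs get -r mounted,mountpoint tank       # which datasets ZFS thinks are mounted, and where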

Swapping older Samsung Enterprise SSD for Consumer SSD (EVO 850) in Dell PowerEdge R820 server in RAID 5 configuration?

I work for a nonprofit organization that contracts with a local IT outfit. We are currently using three enterprise SSD drives in our Dell PowerEdge R820 rig, set up in a RAID 5 configuration. We have been having discussions about upping our capacity to at least 1 TB (possibly 2 TB), because we always seem to be up against our capacity limit as things stand now. The contracted IT outfit has recommended three Samsung SM863 SATA 1.92TB (Enterprise) drives to replace the current drives. Their cost on the drives was going to bring our total to around $6,000 (not including labor). Of course, you can buy these drives direct from Samsung for $1,260. This is where my mistrust of this recommendation began to grow, as a $1,000 price difference from OEM versus the IT outfit is very odd to say the least. I've done a bit of research and found that when it comes to solid state, the enterprise drives have about the same median lifespan as the consumer level drives, like the Samsung EVO 850 1TB,

Top level domain/domain suffix for private network?

At our office, we have a local area network with a purely internal DNS setup, on which clients are all named whatever.lan . I also have a VMware environment, and on the virtual-machine-only network, I name the virtual machines whatever.vm . Currently, this network for the virtual machines isn't reachable from our local area network, but we're setting up a production network to migrate these virtual machines to, which will be reachable from the LAN. As a result, we're trying to settle on a convention for the domain suffix/TLD we apply to the guests on this new network we're setting up, but we can't come up with a good one, given that .vm , .local and .lan all have existing connotations in our environment. So, what's the best practice in this situation? Is there a list of TLDs or domain names somewhere that's safe to use for a purely internal network?
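
One common recommendation is to delegate a subdomain of a domain the organisation actually owns rather than inventing a private TLD; a minimal BIND-style sketch, with internal.example.com standing in for a real registered domain:

    zone "internal.example.com" {
        type master;
        file "/etc/bind/db.internal.example.com";
    };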

ssl - Serving port 443 over http creates 400 Bad Request Error instead of redirect

So for posterity's sake, I am trying to configure my server so that even when someone tries to go to http:// domain.com:443, they are correctly redirected to the https version of the site (https:// domain.com). When testing something like http:// domain.com:443, it does not redirect correctly to https:// domain.com; instead I get hit with a 400 Bad Request page with the following content:

Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port. Instead use the HTTPS scheme to access this URL, please.
Apache/2.4.18 (Ubuntu) Server at sub.domain.com Port 443

I tried including the following lines in my 000-default.conf, in the <VirtualHost *:80> block:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}/$1 [R,L]

But it didn't work. This issue occurs on all domains, subdomains and the server IP itself. Possibly related, trying to do a dry run of letsencrypt returns
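
The rewrite rules in the *:80 virtual host never see this request, because a plain-HTTP request to port 443 lands in the SSL virtual host. One workaround sometimes suggested (an assumption here, not something the excerpt confirms) is to point the 400 error document at the https site:

    <VirtualHost *:443>
        ServerName domain.com
        # SSLEngine and certificate directives omitted
        # plain HTTP on this port produces a 400; send the visitor to the https URL instead
        ErrorDocument 400 https://domain.com/
    </VirtualHost>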

hotswap - Are all SAS drives hot-swappable?

Can get a 300GB Cheetah 15K SAS 3.5 disk for quite a bit cheaper through another supplier vs. Dell's out-of-warranty depot (for an aging PowerEdge server). Specs are identical, wondering if there's any difference that I'm not seeing as far as hot-swappability (<-- just made that up). Assuming it's more of the controller/connector vs. the drive? Answer Yes, they're hot-swappable... The drives are usually compatible and the hot-swap ability is built into the SAS specification. The SFF-8482 connector on the drives is a standard and there's nothing physically unique to Dell drives versus the OEM. I'd still prefer to use the manufacturer drive part number before going OEM, though. If you dig a bit harder (eBay, refurbished, liquidators), you'll be able to find the Dell-specific drive you need at or below the cost of the OEM.

raid - HP SmartArray P400: How to repair failed logical drive?

I have an HP server with a SmartArray P400 controller (incl. 256 MB cache / battery backup) and a logical drive whose failed physical drive I have replaced, but it does not rebuild. This is how it looked when I detected the error:

~# /usr/sbin/hpacucli ctrl slot=0 show config

Smart Array P400 in Slot 0 (Embedded)    (sn: XXXX)

   array A (SATA, Unused Space: 0 MB)
      logicaldrive 1 (698.6 GB, RAID 1, OK)
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)

   array B (SATA, Unused Space: 0 MB)
      logicaldrive 2 (2.7 TB, RAID 5, Failed)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
      physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
      physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed)
      physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)

   unassigned
      physicaldrive 2I:1:8 (port 2I:box 1:bay
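
Not part of the excerpt, but a hedged sketch of the hpacucli commands one might use to inspect the failed array and, only if the data is known to be intact, force the logical drive back online. Slot and drive IDs follow the output above:

    hpacucli ctrl slot=0 ld 2 show detail
    hpacucli ctrl slot=0 pd 2I:1:6 show detail
    # last resort only; accepts the risk of serving stale or corrupt data:
    hpacucli ctrl slot=0 ld 2 modify reenable forced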

Running Tomcat on an Apache Alias Or Map Port numbers to Hostnames

I have WAMP installed with Apache at port 80 and Tomcat installed at port 8080. So, I access my PHP projects from localhost/ and Java projects at localhost:8080/ . Can I install or map Tomcat to a better address like localhost/java/ or, better still, a pseudo-name like javahost instead of localhost? I have added a line to the HOSTS file: 127.0.0.1:8080 javahost But that doesn't work. And I cannot ping javahost. I guess it's not supposed to work that way. Is there a way out? WAMPSERVER 2 with APACHE 2.2.11, TOMCAT 6.0.29, WINDOWS XP PRO SP3. Update: Thanks to @bindbn I changed the hosts file to:

127.0.0.1 javahost

I enabled the proxy module in Apache, then added this to the end of httpd.conf:

<VirtualHost javahost>
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080
    ProxyPassReverse / http://localhost:8080
</VirtualHost>

(from "using Virtualhost & mod_proxy together") Following which, javahost also loads the PHP website hosted at port 80 instead of the localhost:8080 website. Update: Found this on the interw
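
A hedged sketch of scoping the proxy to a name-based virtual host so that javahost goes to Tomcat while localhost keeps serving the PHP sites. Apache 2.2 also needs NameVirtualHost for this, and the existing PHP site would need its own <VirtualHost> with ServerName localhost:

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName javahost
        ProxyPreserveHost On
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>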

VMware ESXi 3.5 HP Proliant DL380 G4 reduce fan noise

I've set up VMware ESXi v3.5 on an HP Proliant DL380 G4, and I'd like a way to reduce the fan noise. Under Health status, I see the following [ESXi health status screenshot]: Is this normal? I've read about there being a Fan component as well. Is there a firmware/BIOS update that would help? I've found HP Management Agents for VMware ESX Server 3.x on the HP website, but it appears to not work for ESXi. Do I have to buy ESX? (Where/how/$?) If I enable ssh, can I install the agent? How? Answer You don't have any options for ESXi 3.5. The fan control on that model is managed via the HP Management Agents, but those are not available for ESXi 3.5. ESXi versions 4 and 5 have agents available as add-ons or with HP's specific ESXi builds. It's an old server, though. If this is a new installation, please use one of the current builds of VMware. ESXi 4 will work on that machine.

domain name system - Mailgun mails bouncing, possible DNS records are wrong?

This has probably been asked and answered before, but I'm a bit lost because I don't know what's happening and therefore don't know what to look for. I would not only like a solution of course, but I would also like to understand what's happening. I have a technical background, but in software development. Servers, DNS records, etc. are a bit new to me (although I've managed). I'm running a web application on shared hosting. I have access to a Plesk control panel (I believe 12.5). The domain name is registered at another company. And for sending mails, I'm using Mailgun (calling their API). Now, some mails bounce (others don't), with messages like: Sender address rejected: Domain not found sorry, your domain does not exists. When I use MXToolbox, an MX Lookup looks fine. But when I test the email server (with MXToolbox), I see the following messages:

Reverse DNS does not match SMTP Banner
Warning - Does not support TLS

I don't think the second is a
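
Some dig queries (not from the post; example.com stands in for the real domain and 203.0.113.10 for the sending mail server's IP) that can help pin down which record the receiving servers are rejecting:

    dig MX example.com +short
    dig TXT example.com +short                      # SPF record; Mailgun normally wants an include:mailgun.org here
    dig TXT smtp._domainkey.example.com +short      # DKIM key published for Mailgun (the selector may differ)
    dig -x 203.0.113.10 +short                      # reverse DNS for the IP in the SMTP banner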

Centos Xen resizing DomU partition and volume group

I have a setup like so:

Dom0 LV
  DomU physical disk
    XVDA1 (/boot)
    XVDA2 (DomU PV)
      VolGroup00 (DomU VG)
        LogVol00 (swap)
        LogVol01 (/)

I am trying to resize the DomU root filesystem (VolGroup00-LogVol01). I realize that I now need to resize the partition XVDA2, however when I try doing this with parted on Dom0 it just tells me "Error: Could not detect file system." So to resize the root part VolGroup-LogVol00 shouldn't the process be:

# Shut down DomU
xm shutdown domU

# Resize Dom0 logical volume
lvextend -L+2G /dev/volumes/domU-vol

# Parted
parted /dev/volumes/domU-vol

# Resize root partition
resize 2 START END
(This is where I get an error: "Error: Could not detect file system.")

# add the vm volume group to Dom0 lvm
kpartx -a /dev/volumes/domU-vol

# resize the domU PV
pvresize /dev/mapper/domU-pl   (as listed in pvdisplay)

# The
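
A hedged sketch of one common alternative when parted refuses to resize a partition holding an LVM PV: grow the partition table entry from inside the guest and let LVM and the filesystem follow. Device and volume names assume the layout above; take a backup first:

    # inside the DomU, after the backing LV in Dom0 has been grown and the guest rebooted
    fdisk /dev/xvda                        # delete xvda2, recreate it with the same start sector and a larger end
    partprobe /dev/xvda                    # or reboot so the kernel re-reads the partition table
    pvresize /dev/xvda2
    lvextend -L+2G /dev/VolGroup00/LogVol01
    resize2fs /dev/VolGroup00/LogVol01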

502 bad gateway nginx and apache servers

Hello there, can you please help me? I'm trying to set up Apache and nginx on my Ubuntu 16.04 server, but when I try to visit localhost/info.php for example I'm getting a 502 Bad Gateway nginx/1.9.15 (Ubuntu) error, and this is from the error log file:

2017/01/24 10:50:55 [error] 14774#14774: *29 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: example.com, request: "GET /info.php HTTP/1.1", upstream: "http://127.0.0.1:8080/info.php", host: "localhost"

My /etc/apache2/ports.conf is:

Listen 8000

<IfModule ssl_module>
    Listen 443
</IfModule>

<IfModule mod_gnutls.c>
    Listen 443
</IfModule>

My /etc/nginx/sites-available/default file is:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name 127.0.0.1;

    location / {
        # First attempt to serve request as file, then
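
One thing worth noting (an observation, not something the excerpt states): ports.conf has Apache listening on 8000 while the nginx error log shows it proxying to 8080. A minimal sketch of making the upstream match, assuming PHP requests are meant to be handed to Apache:

    # fragment of /etc/nginx/sites-available/default
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8000;   # must match the Listen port in /etc/apache2/ports.conf
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }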