Posts

Showing posts from August, 2019

centos7 - Mount USB Drive on ESXi 6.5 Host

I have ESXi 6.5 (no vCenter / vSphere) and have installed CentOS 7 as a guest VM. I want to mount my USB drive in CentOS 7 so I can share it on the network with Samba, but ESXi doesn't seem to cooperate in my situation:

[root@271:/vmfs/volumes] lsusb
Bus 001 Device 003: ID 03f0:7d40 Hewlett-Packard
Bus 003 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 002: ID 1058:1042 Western Digital Technologies, Inc.
Bus 003 Device 001: ID 0e0f:8002 VMware, Inc.
Bus 002 Device 001: ID 0e0f:8002 VMware, Inc.
Bus 001 Device 001: ID 0e0f:8003 VMware, Inc.

Here Hewlett-Packard and Western Digital Technologies, Inc. are my USB drives. The USB option in the ESXi web UI is greyed out. Please help me mount these drives in the CentOS VM.
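A plausible route here is USB passthrough to the guest, sketched below under the assumption that ESXi's usbarbitrator service governs which devices are offered to VMs (the VM usually has to be powered off, with a USB controller added to it first):

# on the ESXi host shell: passthrough requires usbarbitrator to be running
/etc/init.d/usbarbitrator status
/etc/init.d/usbarbitrator start

With the arbitrator running, edit the VM's settings in the web UI, add a USB controller, then add the USB device; inside CentOS the disk should then appear as an ordinary /dev/sdX block device that can be mounted and exported via Samba. (Conversely, if you wanted the host itself to use the disk, you would stop usbarbitrator; the two uses are mutually exclusive.)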

httpd - Failed to restart Apache server

I'm trying to restart Apache through Plesk. When I click "Restart", I'm getting the following message:

Error: Unable to make action: Unable to manage service by apache_control_adapter: Service /etc/init.d/httpd failed to start ('--start', 'web')

I also tried (thanks to paulsm4 from Stack Overflow) to do it through the terminal. I typed sudo /etc/init.d/httpd start and the error I got is:

Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs [FAILED]

Thank you in advance!
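The "(98)Address already in use" errors mean something is still bound to port 80, commonly a stale httpd process or another web server such as nginx. A sketch of identifying and clearing it (the process you find may of course differ):

sudo ss -tlnp | grep ':80 '     # or: sudo netstat -tlnp | grep ':80 '
sudo killall httpd              # only if the listener turns out to be orphaned httpd workers

Once nothing else holds port 80, the Plesk restart should succeed.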

switch - gigabit ethernet to single mode LC fiber

I need a fiber converter that takes single-mode LC fiber from the patch to a gigabit port on the switch. I've been investigating the Canary GFT-1036, but am having a difficult time finding a vendor my university can purchase from. Are there any alternative products that anyone has used, or is using, that could offer up some ideas?

Answer

Copper to LC single-mode is a pretty common combination. The easiest approach, particularly if you're limited in vendors, would be to get a switch with SFP ports and a suitable single-mode SFP module. (There are applications where you want a more transparent solution; adding more switches can sometimes be a problem. In that case you'll want a media converter -- I'd recommend one that takes SFPs, as you get more flexibility to switch or replace optics.)

windows server 2003 - How can I prevent an unintentional DDOS running ColdFusion 8 with IIS 6?

We had an interesting outage today on one of our client's websites. Out of nowhere, the website was inaccessible. The website runs by itself on a dedicated physical Windows 2003 R2 server (probably overkill, I know, but that's a discussion for a different day). After restarting IIS and the ColdFusion Application Service, the problem came back several times. My initial thought was that it was a DNS issue, which happens occasionally - the last time it happened was after Hurricane Sandy when our ISP was out, and we had to make some network config changes. But it was not a DNS issue. My second thought was that it was a DDOS attack, but there's very little reason anyone would want to take this site down. When we called our ISP, the operator on the other end noted that traffic was spiking significantly. As it turned out, the client had unintentionally caused a DDOS on the website, after they FTPed a very large video file and then mass emailed a link to it. Hundreds of people clicked…

memory difference on top and htop

I got a new VPS as my database server. I installed only MySQL and started it. After some time (even after shutting down the MySQL service), I see only 3-4% of the memory used in htop, but according to top I only have 30MB of free memory. It has a total of 4GB RAM. I don't know which one to trust. Can someone explain the difference between top and htop memory usage, and what may be causing the high usage in the top stats? Thanks.

Answer

It's just the difference of whether you consider memory that contains discardable data as used or not. The memory is used in the sense that it contains information that may be useful. But it's free in the sense that the information can simply be discarded if the memory is needed. For example, say you run a program. The executable file that holds the program itself is still in memory. But that data is not needed at the time. If the program runs again, however, the information can be used from memory so it doesn't have to be loaded from disk again.
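A quick way to see this distinction on the box itself (a sketch; the numbers below are illustrative, not from the question):

free -m
              total        used        free      shared  buff/cache   available
Mem:           4096         180          90          10        3826        3700

The buff/cache column is the "discardable" memory described above; tools that count it as used will report the box as nearly full, while the available column shows what could actually be handed to applications.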

unable to establish remote connection to mariadb (mysql) on centos7 on gcloud from mac or gscript

Newbie here. Issue: unable to establish a remote connection to mariadb (mysql) on centos7 from a mac or a gscript.

Setup: I've got 2 LAMP machines, one test (107) and another production (35):
107 - centos6.5 with mysql (VPS in Digital Ocean)
35 - centos7 with mariadb-server-5.5.52-1.el7.x86_64 (Compute Engine on gcloud)
db, dbuser, dbpass and port (3306) are the same on both machines.

Problem details: I can establish a remote and a JDBC connection to mysql on 107 (test server) from a mac using nc, MySQL Workbench and a gscript app; but when I try to get a connection going with 35 (prod server) with the same user/pass/db parameters I get this response:

workbench: Can't connect to MySQL server on '35.190.134.164' (60)
gscript (jdbc connection exception thrown): Failed to establish a database connection. Check connection string, username and password.
nc on mac: ... (literally nothing after issuing the command nc <35's ip address> 3306)

What I tried: 1. https://mariadb.com/kb/e
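Since nc gets no response at all, the usual suspects are the listener's bind address and the firewalls in front of it. A checklist sketch (paths and names are CentOS 7 / gcloud defaults, which may differ from this exact setup):

# on the MariaDB host: is it listening on all interfaces, or only 127.0.0.1?
grep -r bind-address /etc/my.cnf /etc/my.cnf.d/
# open the port in firewalld
sudo firewall-cmd --permanent --add-service=mysql && sudo firewall-cmd --reload
# make sure the user is allowed to connect from remote hosts
mysql -u root -p -e "GRANT ALL ON db.* TO 'dbuser'@'%' IDENTIFIED BY 'dbpass';"

On Compute Engine there is also a VPC firewall outside the VM; a rule permitting tcp:3306 has to exist there too (e.g. via gcloud compute firewall-rules create).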

hp proliant - HP DL580 HDD Fan Issue

I recently obtained an HP ProLiant DL580 G7 without any HDDs, so I went ahead and installed an SSD, which works great, and the other day I ordered a 2.5" SATA 2 TB HDD (ST2000LMB15). After installing the new SATA drive, one of the temperature sensors on the server (sensor 17) has seemingly jumped up to 47 degrees Celsius, the exact temperature necessary for the fans to increase their speed in an endless cycle until they hit their maximum speed (this is loud, WAY too loud!). After removing the drive, the sensor magically returns to its normal 35 degrees Celsius and the fan speeds return to normal. I have found a thread where two different people seem to run into this same issue, but no solution was ever posted. At this point, I'm assuming that the HDD is reporting incompatible or incorrect data to the server, ultimately causing this annoying issue. Right now, the only solution I see is to replace the drive with an enterprise SAS drive, but I w…

amazon ec2 - EC2 retirement - Meteor / MongoDB

I am hosting my Meteor application on an Amazon EC2 instance. Today I got a mail that my instance is scheduled for retirement at the end of the month. I don't have any experience with this, or with the best ways to handle the situation. My root device type is EBS. Amazon suggests: "We recommend that you launch replacement instances and start migrating to them." I already created an AMI image of my instance and launched it; unfortunately my app can't be reached on the new instance. At the moment the original running instance can still be reached via web and SSH. Is there any best practice? Thanks in advance.
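For an EBS-backed instance, one commonly used option is simply to stop and start it (not reboot), which moves it onto different underlying hardware and clears the retirement. A sketch with the AWS CLI (the instance ID is a placeholder; note the public IP changes on stop/start unless an Elastic IP is attached):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0

If the AMI-based replacement can't be reached, the first things to compare are its security group, its new public IP/DNS name, and any application config (e.g. Meteor's ROOT_URL) still pointing at the old address.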

WebDAV Security and Hardening

What are the security ramifications that one should be aware of when considering using WebDAV? How does one go about securing it? What else should I know about it?

Answer

WebDAV by itself doesn't have any security. It'll let anyone touch anything. The standards docs say this should be handled in the web-server layer (or the application, if that's providing the WebDAV service).

Authentication: WebDAV has no native auth service, so one needs to be put in front of it. Different webservers handle this differently, depending on what DAV module you're using; server-specific modules (mod_dav) will behave differently than those based on app-servers like Tomcat. This is the normal HTTP auth stuff: basic, digest, SASL, Kerberos, etc.

HTTPS: without it, the authentication won't be encrypted (unless you're doing IIS-based WebDAV and NTLM), and the files won't be transferred encrypted either.

Local Auth: depending on what's driving…
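As a concrete illustration of putting auth in front of DAV, here is a minimal Apache httpd sketch (it assumes mod_dav, mod_dav_fs and mod_auth_basic are loaded; the paths and realm name are placeholders):

DavLockDB /var/lib/apache2/davlock
<Location /dav>
    Dav On
    AuthType Basic
    AuthName "WebDAV area"
    AuthUserFile /etc/apache2/dav.passwd
    Require valid-user
</Location>

Combined with an HTTPS-only virtual host, this covers the authentication and transport points above; basic auth over plain HTTP would send credentials effectively in the clear.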

linux - multiple IP addresses bound - which one is used for outgoing packets?

Using IP aliasing, a Linux box has bound multiple IP addresses from the same subnet on the same NIC, so ifconfig shows device eth0, eth0:1 and eth0:2, each with its own address. How does Linux determine the IP source address used for outgoing IP traffic? Is there a way to define what source IP address certain outgoing traffic should use?

Answer

I'm not sure why he posted his answer as a comment, but David Schwartz is correct: you should set a netmask of 255.255.255.255 on the 'secondary' addresses, and you should not set gateway addresses on them. You put the correct netmask on the main address, and give it the correct gateway address.
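A sketch of that setup with iproute2, using RFC 5737 example addresses in place of the ones elided from the question:

ip addr add 192.0.2.10/24 dev eth0      # primary: real netmask, used for source selection
ip addr add 192.0.2.11/32 dev eth0      # secondaries: host-only netmask, no gateway
ip addr add 192.0.2.12/32 dev eth0
ip route add default via 192.0.2.1 dev eth0 src 192.0.2.10   # pin the default source

With this layout, ordinary outgoing traffic uses the primary address, while applications that explicitly bind() to a secondary address can still source from it.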

router - ubuntu server refusing connections via port forwarding

Getting some really weird behavior from our Ubuntu server... it's behind a Verizon router firewall with port forwarding (port 8080 to port 80 on the server), and we've been having issues accessing it via this external IP. From within the network, it appears to respond normally (I can access it via web browser and SSH), but it refuses connections through port forwarding (using our static external IP). The strangest thing is that it actually responds to external port-forwarded connections right after being restarted, but quickly lapses back into this pattern of refusing external connections. I'm a bit of a server newbie (I'm actually a programmer in a small startup that just lost their server ops guy, urgh) so this is all trial by fire for me. Does anyone have any advice on what could be going wrong here? Any help would be appreciated, thanks. EDIT: We have another server being forwarded on port 80, and it hasn't had any accessibility problems. So now I'm beginning to…
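When it's in the failing state, it helps to establish whether the forwarded packets are reaching the server at all. A diagnostic sketch (the interface name is an assumption; adjust to the server's actual NIC):

sudo ss -tlnp | grep ':80 '              # is the web server still listening?
sudo tcpdump -i eth0 -n tcp port 80      # watch while a connection is attempted from outside

If tcpdump shows nothing during an external attempt, the router has stopped forwarding and the problem is upstream; if packets arrive but connections are refused, it's on the server, e.g. a firewall rule or the daemon itself.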

active directory - Remote branches and domain controller

I have a very small, but distributed, network. In the central office, there's a Windows Server 2008 R2 VM with a few Linux VMs running on the same box. There are two client PCs running Windows 7. In a remote location, there is a single client PC, currently connecting into the central office with OpenVPN via one of the Linux servers. I would like to move from Workgroup to an administrative domain for better group policy control. I will not be able to justify additional server hardware or Microsoft licenses (those things are ridiculous) but can easily add more VMs to the existing server. The way I see it, I have a few decisions to make, each with a few options.

Which server runs the domain?
- Windows Server 2008: the traditional AD solution; can't add a backup controller without another license; the Windows server is also running FTP and AS functions, while AD servers typically just host AD.
- One of the Linux boxes with Samba: not the traditional AD solution (am I giving up any features?); can easil…
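For reference, provisioning a Samba-based domain controller is nearly a one-liner these days. A sketch, assuming Samba 4 or later and placeholder realm/domain names:

sudo samba-tool domain provision --use-rfc2307 --realm=CORP.EXAMPLE.COM \
    --domain=CORP --server-role=dc --dns-backend=SAMBA_INTERNAL

Samba 4's AD DC mode speaks enough of the AD protocols for Windows 7 clients and group policy; the older Samba 3 NT4-style domain is the variant where you would genuinely give up features.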

DigitalOcean, WordPress and Zoho SMTP

I have set up a droplet on DigitalOcean (Ubuntu, nginx, PHP 5.6 and MySQL) and I plan on running a WordPress site. The site has a register/login/notification feature that will require sending out emails occasionally, max 100 emails per day. Is it possible to use the free Zoho SMTP service for this? I know it should work in theory; I'm interested in practical experience.
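One practical thing worth checking up front is raw SMTP reachability from the droplet, since some hosts block outbound mail ports on new accounts (DigitalOcean has historically blocked port 25, though submission on 587 is normally fine). A sketch, assuming Zoho's usual submission endpoint of smtp.zoho.com:587 (verify against Zoho's current documentation):

openssl s_client -connect smtp.zoho.com:587 -starttls smtp

If that returns a banner and a certificate, a WordPress SMTP plugin pointed at the same host, port and mailbox credentials should be able to send.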

hyper v - Wake on Lan for Dell Poweredge 2950?

I'm looking into the possibility of using Wake-on-LAN for a machine used in a test lab to run VMs. It doesn't really need to be on all the time, so it would be nice to be able to power it down when it's not being used to save electricity. Looking through Dell's WoL support documentation, the NIC (Dual Embedded Broadcom® NetXtreme II 5708 Gigabit Ethernet NIC) should supposedly support WoL. The doc is a little old, though, and doesn't discuss Server 2k8, much less 2k8 R2. Nevertheless, one of the steps in the Server 2k3 configuration details specifies that hibernation support should be enabled; however, when going into the power options for my server, there are no hibernation configuration options in the GUI. I should note that my understanding of WoL is fairly weak. Do systems monitoring the network for magic packets need to be explicitly hibernated, and not shut off as normal? I wouldn't think that the way the OS shut down the system would affect the motherboard…
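For the sending side, once the BIOS and NIC are configured to keep standby power on the port, testing is simple from any other machine on the same LAN. A sketch (the MAC address is a placeholder for the Broadcom NIC's real one):

wakeonlan 00:11:22:33:44:55
# or, with the etherwake package:
sudo etherwake 00:11:22:33:44:55

In general the magic packet is handled by the NIC hardware while the OS is off, so whether the box was hibernated or shut down matters only insofar as the OS driver leaves WoL armed on the NIC at power-off.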

permissions - Maintaining Linux file owner and group info in a multi-user system

I have a web application running on Linux (CentOS 6) under a generic non-root user name, say app1. I've associated all the files under the folder /home/app1 with that user/group (using chown -R app1:app1) so it can serve up web pages and write to logs as necessary. When it comes to updates, though, I'm trying to figure out how to handle permissions so that I don't need to constantly run the chown command on the /home/app1 directory. There's a requirement to log in to the server with a unique ID, so if devguy1 logs in and copies an update, the files he wrote over now have devguy1 as the owner and group, and app1 won't be able to read the new files. Devguy1 is part of the app1 group, so he can update the app but not vice versa. I see that there's a way to copy files using cp -p that will preserve permissions, but we're usually using Beyond Compare to move updates from our dev server to production, which doesn't have that option. Is there a setting…
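A common way to make this self-maintaining is the setgid bit plus default ACLs, so that whatever devguy1 writes is group-readable by app1 automatically. A sketch against the paths from the question (requires ACL support, which CentOS 6 generally has):

sudo find /home/app1 -type d -exec chmod g+s {} +      # new files inherit the app1 group
sudo setfacl -R -m g:app1:rwX /home/app1               # fix what's there now
sudo setfacl -R -d -m g:app1:rwX /home/app1            # default ACL for future files

With that in place, ownership still shows devguy1, but the app1 group always retains read (and, per the X flag, directory-traverse) access, so the periodic chown -R stops being necessary.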

using separate mail server to send website emails, cannot send email to mailboxes on the same domain

I've configured a new WordPress website to send emails from no-reply@domain.com for things like user registrations, password resets and newsletters. The email info@domain.com is the only other mailbox for this domain, and is what the site's owner will use for enquiries and personal outgoing mail. info@domain.com uses Namecheap's private email hosting, while no-reply@domain.com uses my own iRedMail mail server. I am able to receive emails at info@domain.com and send from no-reply@domain.com without issues; however, if I try to send an email from no-reply@domain.com to info@domain.com, I receive the following error:

5.1.1 : Recipient address rejected: User unknown in virtual mailbox table. Please check the message recipient info@domain.com and try again.

What can I do about this? I have tried creating an info@domain.com user on the iRedMail server, however emails sent from no-reply@domain.com then go to the inbox on the iRedMail server rather than the intended Namecheap inbox.
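That error is the classic symptom of a split domain: the iRedMail Postfix believes it is authoritative for domain.com, so it looks the recipient up in its own virtual mailbox table instead of following the domain's MX records to Namecheap. A sketch of confirming that (in iRedMail the hosted domains live in its SQL/LDAP backend rather than in main.cf, so the fix is removing domain.com from its hosted domains there):

postconf mydestination virtual_mailbox_domains   # domain.com should appear in neither

If the sending server must keep some local knowledge of the domain, Postfix's relay_domains / transport_maps machinery can route specific addresses externally, but simply not hosting the domain on the sender is the cleaner setup.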

domain name system - Dynamic subdomain routing

I asked this question over at Stack Overflow, but got very few views: https://stackoverflow.com/questions/2284917/route-web-requests-to-different-servers-based-on-subdomain Perhaps it's more applicable to this crowd. Here it is again for convenience: I have a platform where a user can create a new website using a subdomain. There will be thousands of these, e.g. abc.mydomain.com, def.mydomain.com - hopefully, if we are successful, hundreds of thousands. I need to be able to route these domains to different IPs to point at a particular app server. I have this mapping in a database right now. What are the best practices and recommended technologies here? I see a couple of options:

1. Have DNS set up with a wildcard CNAME entry so that all requests go to a single IP, where perhaps two machines using heartbeat (for failover) know how to look up the IP in the database and then do an HTTP redirect to the appropriate app server. This seems clunky and slow to me.
2. Run my own DNS server that can be pr…
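For reference, the wildcard half of option 1 is a single record at the DNS level; a sketch in BIND zone-file syntax with an RFC 5737 example address:

*.mydomain.com.   300   IN   A   203.0.113.10

Everything not matched by a more specific record then lands on that IP, where a reverse proxy (nginx, HAProxy, etc.) can route on the Host header - typically by proxying to an upstream chosen from the database mapping rather than issuing an HTTP redirect, which avoids the extra round trip the question worries about.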

How does a domain host itself authoritatively?

Suppose that I want to have custom nameservers that authoritatively host mydomain.com: ns1.mydomain.com and ns2.mydomain.com. They would need to be resolvable somehow for the rest of the internet to be able to use them, but how is that possible when the nameserver records are rooted under the domain that they're authoritative for?
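The circularity is broken by glue records: when you register ns1/ns2.mydomain.com as the domain's nameservers, the registrar also files their IP addresses with the parent (.com) zone, which hands them out alongside the delegation. You can observe this for any such domain (a sketch; a.gtld-servers.net is one of the real .com servers):

dig +norec NS mydomain.com @a.gtld-servers.net

The ADDITIONAL section of the referral carries the glue A/AAAA records, so resolvers never need to resolve the nameserver names through the very zone those servers host.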

linux - Does lshw list the "factory" speed of a memory module or the effective speed and how to find the former?

I hope I phrased this correctly. lshw gives:

description: DIMM Synchronous 400 MHz (2.5 ns)
product: M378B5773CH0-CH9
vendor: Samsung
physical id: 0
slot: DIMM0
size: 2GiB
width: 64 bits
clock: 400MHz (2.5ns)

And indeed the memory speed is set to 800MHz in the BIOS, which I think makes sense since it is a double rate. On the other hand, Googling strongly suggests that this product number corresponds to the PC3-10600 type, which is 1333MHz, not 800MHz. And this seems to be confirmed in the BIOS, where if I select Auto for the memory bus speed, 1333MHz is selected "based on SPD settings". However, in the latter case the computer does not boot, i.e. the kernel panics, complaining that something attempted to kill the Idle process. So, I am beginning to suspect that I have been given defective memory, that the technician who installed it saw this and lowered the bus speed. Is this a possibility? NEW DEVELOPMENT: …
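To separate the module's rated speed from the speed it is currently running at, dmidecode is more explicit than lshw. A sketch (run as root; exact field names vary a little between dmidecode versions):

dmidecode --type memory | grep -Ei 'speed|part number'

Recent versions print both a "Speed:" line (the SPD-rated maximum) and a "Configured ... Speed:" line (what the BIOS actually clocked the module to), which directly answers the factory-vs-effective question in the title.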

exchange 2007 - Why can I send an email to an external email address from outside of the server but not inside?

I have an email address that is a forwarding address hosted on GoDaddy. It forwards to an email address on my Exchange 2007 server. My clients have no problem sending to it, and if I use my own Gmail account I have no problems. The strange thing is that anyone on our domain cannot send an email to that address without receiving this message:

Delivery has failed to these recipients or distribution lists: info@l****e.com
The recipient's e-mail address was not found in the recipient's e-mail system. Microsoft Exchange will not try to redeliver this message for you. Please check the e-mail address and try resending this message, or provide the following diagnostic text to your system administrator.
Sent by Microsoft Exchange Server 2007

There was more to the error message but it's pretty long so I didn't post it. I can if you need me to.

Answer

Use the message tracking center to follow the message. I'm guessing that your Exchange server is…
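The same tracking can be done from the Exchange Management Shell; a sketch (the address stands in for the masked one from the question):

Get-MessageTrackingLog -Recipients "info@example.com" -EventId FAIL | Format-List

The likely cause behind the truncated guess: if the forwarding address's domain appears in Get-AcceptedDomain as Authoritative, Exchange looks for the mailbox locally, finds none, and generates exactly this "not found in the recipient's e-mail system" NDR instead of routing the mail out to GoDaddy.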

place php errors in log file

I am running Mac OS X 10.6.4 on an iMac and am using it as a development server. I have Apache and Entropy PHP 5 installed. When I write my applications, some pages won't run when the PHP has errors; however, these are not recorded in a log file. I created one, php_errors.log, and entered the following in the php.ini file:

error_log = /usr/local/php5/logs/php_errors.log

However, errors are not written to this file, and I have log_errors = true. What could be the problem?
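A troubleshooting sketch (paths from the question; the _www user is Apache's account on OS X, an assumption worth verifying with ps):

php --ini                                  # the CLI may load a different php.ini than Apache does
php -i | grep -E 'error_log|log_errors'    # confirm the settings actually took effect
sudo touch /usr/local/php5/logs/php_errors.log
sudo chown _www /usr/local/php5/logs/php_errors.log
sudo apachectl restart                     # php.ini changes need an Apache restart

Two frequent culprits: the boolean is canonically written log_errors = On, and the web-server user must have write permission on both the log file and its directory.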

Can ext4 fs be completely unrecoverable broken due to power loss while disk is writing?

Suppose you are doing a full-speed write to disk on a Linux PC box with an SSD or mechanical disk (the OS is on the same disk; there is no battery/UPS): cat /dev/urandom > omg.txt If the power is lost suddenly during the process, or there is any other sort of ungraceful shutdown/reset: will the file be corrupted and unfixable (i.e. none of the data can be recovered?), and is there a chance the file system will be completely unable to boot?

Answer

Will the file be corrupted and unfixable (i.e. none of the data can be recovered)? Potentially, yes. There are 2 obvious routes via which this could happen. Ext4 is a metadata journaling filesystem - it only journals the changes to the file's metadata (size, location, dates), not the file contents (btrfs and zfs do full-data journalling at a big performance cost). So although you should never have to fsck the disk, it doesn't follow that every write operation between opening the file and closing + flushing the buffers will…
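For completeness, ext4 can also be switched into full data journaling, trading write throughput for the same protection btrfs/zfs are credited with above. A sketch (device and mount point are illustrative):

mount -o data=journal /dev/sda2 /mnt
# or persistently, as the options field of the filesystem's /etc/fstab line:
# /dev/sda2  /mnt  ext4  data=journal  0 2

The default is data=ordered (data blocks are written out before the metadata that references them commits), which is why an interrupted write typically leaves a truncated or zero-filled file rather than a corrupted filesystem.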

monitoring - Script to monitor specific processes' memory utilization

I have a client I do some remote server administration work for who recently started a 2-week trial period with New Relic. They have recently had some performance issues and asked me to take a look at things. They saw that memory on one of their servers was only hitting 50% according to New Relic, and became suspicious that the memory metrics/graphs were off (and emailed New Relic about it). I spent some time this afternoon reviewing configuration files and the CPU/Memory/IO graphs on New Relic, and making some recommendations. I'm not convinced that the memory metrics are wrong. For example, on my client's DB server, New Relic said MySQL was using (at times) 3,481MB of RAM. I ran ps to obtain the pid for MySQL, and ran the following command:

cat /proc/29313/status

which output the following for VmRSS:

VmRSS: 3566268 kB (which is 3,482MB of RAM)

my.cnf has some settings I recommended that they tweak (key_buffer_size, etc...). [Edit: If you're curious, this is…
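Since the title asks for a script, here is a minimal sketch of per-process RSS sampling built on the same /proc/<pid>/status field used above (pid and interval are arguments; POSIX shell):

#!/bin/sh
# usage: rss-watch.sh <pid> [interval-seconds]
pid=$1; interval=${2:-60}
while kill -0 "$pid" 2>/dev/null; do
    # print a timestamp plus the process's current resident set size
    echo "$(date '+%F %T') $(awk '/^VmRSS/ {print $2, $3}' "/proc/$pid/status")"
    sleep "$interval"
done

Appending the output to a file gives a crude time series that can be compared directly against New Relic's graph for the same window.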

hp proliant - HP DL360 G6 SAS-4xSATA splitter

I have an HP ProLiant DL360 G6 server with 1 CPU; it runs Windows Server 2012. I wanted to connect extra SATA disks, so I bought a SAS-to-4xSATA splitter cable like this: SAS - 4xSATA cable. The problem is that after connecting the cable with the SATA disks (with external power), Windows Server / the machine does not see any of the disks attached to the splitter cable. (I have already connected other SATA disks with PCIe cards.) I don't know if I'm trying to do it right... Do I need to configure something elsewhere (to see the extra disks on the splitter cable) or do something else?

Answer

The HP Smart Array 410i controller doesn't support JBOD mode for disks. Create new logical drives from the real HDDs in the BIOS RAID setup when the server boots; Windows will see only logical drives on this controller. You can see the same answer at https://serverfault.com/questions/267751/hadoop-jbod-disk-configuration-on-hp-smart-array-410-i-disk-controller
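The same logical drives can also be created from within the OS using HP's array CLI, which avoids rebooting into the BIOS utility. A sketch (hpacucli on older tooling, ssacli on newer; the slot and drive IDs are placeholders - list the real ones first):

hpacucli ctrl all show config                          # find the controller slot and drive IDs
hpacucli ctrl slot=0 create type=ld drives=1I:1:3 raid=0

A single-disk RAID 0 logical drive per physical disk is the usual stand-in for JBOD on these controllers.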

iptables - (dnat|redirect) with masquerade doesn't work

I have a problem: until a little while ago this was working just fine, but now it doesn't, although on another testing server it works fine. I force all traffic through Tor, and that part works. The problem, I think, is with masquerade, as it does not translate the DNAT/redirect port 9040 back to the original port 80/443 after the response is received: http://ipinfo.io:9040

Software: iptables 1.6.1-2ubuntu2; Ubuntu 16.04/18.04 (same result); tor 0.3.2.9-1build1

Networking:
virbr1 - 192.168.2.0/24 - host only
eno1 - 192.168.1.0/24 - internet

Iptables:
/sbin/iptables -t nat -A POSTROUTING -o virbr1 -j MASQUERADE
/sbin/iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
# on another server works without that, this was just for testing
/sbin/iptables -t nat -A PREROUTING -p udp --source 192.168.2.6 ! --destination 192.168.2.1 -j DNAT --to-destination 192.168.2.1:9040
# tested REDIRECT --to-ports 9040, the same
*filter
:INPUT ACCEPT [174876:86417485]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [170612:89138010]
:D…
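When a DNAT appears not to be reversed on the way back, the two things worth inspecting are the conntrack table (reverse translation is conntrack's job, not a separate rule's) and the per-rule packet counters. A diagnostic sketch (the conntrack CLI comes from the conntrack package on Ubuntu):

sudo conntrack -L -p tcp | grep 9040    # are there entries mapping 80/443 <-> 9040?
sudo iptables -t nat -L -v -n           # which NAT rules are actually matching packets?

If the counters show traffic hitting MASQUERADE on the wrong interface, or the DNAT rule never matching at all, that localizes the difference from the working server.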