
Posts

Showing posts from July, 2015

domain name system - What is the real-world impact of glue records on DNS resolution time?

It is common to provide 'glue' records along with NS records, to spare DNS resolvers the bother of looking up the IP address of the DNS server itself. However, in some situations it is not even desirable to provide glue (e.g. example.com NS ns.example.net ... it is not appropriate for a .com nameserver to dish out information about .net domains). In that case, there is a risk that DNS resolvers, lacking the glue information, will take longer to resolve the domain. But how much longer? Is it worth defining my own A record for ns.example.com to point to the IP address of ns.example.net (a so-called 'white label' or 'vanity' nameserver), so that I can provide glue? Or does it not matter much in practice? Why not? Answer Glue is needed when the name servers for a given domain, e.g. example.com, are host names inside that domain, e.g. ns1.example.com. Without that glue it would be impossible to resolve anything inside example.com. In fact yo
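For illustration, this is what the glue case in the answer looks like from the parent zone's point of view (names follow the question; the address is a placeholder):

```
; Delegation in the .com (parent) zone for an in-bailiwick nameserver.
; Without the A record below, resolvers could never find ns1.example.com.
example.com.      IN NS  ns1.example.com.
ns1.example.com.  IN A   192.0.2.53   ; glue record, served by the parent
```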

domain name system - SPF record for Gmail?

I have DNS, with an SPF TXT record, configured for a domain name. The primary user of the domain name now needs to be able to send both from our SMTP servers and from her Gmail account. I've seen all the information about adding "include:_spf.google.com" to the SPF TXT record, but, as I look into it, it appears that record is outdated. In particular, I had the user send me a test message, and noted that it was: Received: from mail-la0-f50.google.com (mail-la0-f50.google.com [209.85.215.50]) However, _spf.google.com doesn't list that IP address: $ dig +short _spf.google.com txt "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all" (Note that a 209.85.218.0 network is listed, but not 209.85.215.0.) Is there a better way to enable sending from Gmail? This user sends to at least one recipient with a stri
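For reference, Google's published guidance remains to reference _spf.google.com via an include (it expands to further includes internally, so the netblocks stay current without you tracking individual ranges). A sketch of a combined record, assuming the asker's own SMTP servers are covered by the mx mechanism:

```
example.com.  IN TXT  "v=spf1 mx include:_spf.google.com ~all"
```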

How to add a Chinese domain name to the Windows hosts file?

Let's say I have a domain 我等主营.com that I want to add to the Windows hosts file for development purposes, like this: 127.0.0.1 www我等主营.com I can save the modification with Notepad++, but when I put the name into a web browser as www.我等主营.com, it does not go to 127.0.0.1; instead it resolves to the domain's public IP, or whatever it is. It works fine for me with English domain names. How do I do it? Answer I think you are required to convert 我等主菅.com into its punycode representation in the hosts file: 127.0.0.1 xn--tiq769bnnsi9h.com
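The conversion the answer describes can be scripted rather than done by hand. A minimal sketch, assuming python3 is available (its built-in idna codec performs the ToASCII conversion):

```shell
# Print the punycode (ACE) form of the IDN from the question
python3 -c 'print("我等主营.com".encode("idna").decode("ascii"))'
```

The resulting xn-- name is what belongs in the hosts file next to 127.0.0.1.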

SQL-Server SQLEXPRESS slower than real SQL 2008 Server?

A week ago I got my first bigger project, with around 150 users every day. For my smaller projects, like a company homepage with 100 visits a day, I installed the free SQL Express. Now it all seems a bit slow and I am really trying to find out why. My idea was, and my question is: is SQL Server Express slower than the full SQL Server 2008 install?

hardware - HP ProLiant DL360 G5 fails to reboot

Some history of this is also available here . As of my latest update on the forum linked above, a cold reboot as well as a hard reset (and the boot that follows) works as expected and successfully boots the system. However, a soft reboot results in the internal health indicator on the front turning RED and long beeps roughly every 6 seconds until I either cold reboot or press and hold the power button. A summary of the system state when this happens: internal health LED indicator: RED; external health LED indicator: green; no LED next to any component is red or amber; no POST message on either the video output or in the IML logs (verified both at the time the issue occurs and after a cold reboot that boots the system successfully). Any thoughts, please share. I hope we can knock this issue down together with your help!

Restrict access to SSH for one specific user

I am looking for a way to secure my servers with the following setup: I have a server where I can log in via SSH. The main account there (named "foo") is secured by a key-based login with a password. I have another user account (named "bar") that I use to log in via cron jobs running on other servers - this one also has a key-based login, but without a password. Now I want to limit access to this machine for the "bar" account. The account should only be accessible from known IPs. However, the "foo" account should not be affected by this; it should basically be accessible from any IP. How can I manage this? Or is there a simpler solution to everything? Answer Manage this with sshd's AllowUsers directive. In /etc/ssh/sshd_config: AllowUsers foo bar@hostname Put the IP and hostname of bar's machine in /etc/hosts on the SSH server (because DNS might be unreachable and the IP might change), restart sshd, and you're all set.
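Spelled out as a config fragment (the address is a placeholder for the cron host's IP):

```
# /etc/ssh/sshd_config
# foo may connect from anywhere; bar only from the listed address
AllowUsers foo bar@203.0.113.5
```

An alternative that avoids touching sshd_config at all is a from="203.0.113.5" prefix on bar's key line in ~/.ssh/authorized_keys, which restricts that key to the given source address.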

domain name system - Should I disable DNS recursion?

I have two Domain Controllers; both are DNS servers, and I have set a forwarder on both, but I have not disabled recursion on either server. There is a recommendation to disable DNS recursion. I think that if I disable DNS recursion it will affect performance, but I also want the best security in place. Please let me know what I should do: should I disable DNS recursion?

Active Directory Domain Names - Forest/Tree/Children

I've been doing some reading on suggested top-level domains for AD and whatnot. I used to set up domains as company.local and that worked just fine; however, more people want to use their external domain company.com instead of the .local suffix. I've got a quick clarification question: how am I supposed to set up my first forest if we're going to actually use our registered domain name? It's easy enough to set up a new forest with company.com, but wouldn't I then have to add a child domain of corp.company.com to a new DC? Essentially requiring two DCs just to set up the one domain. Or would I create the first forest as corp.company.com and be done with it? That seems to make a lot more sense. Answer Bingo on your last statement. Set up your AD forest as corp.company.com. Edit: Also read this post by MDMarra: What should I name my Active Directory?

Set Nginx https (on port 443) without the certificate?

I tried to follow this thread as much as possible, but I always get this message in Chrome: This site can’t be reached - example.com unexpectedly closed the connection. Try: Checking the connection Checking the proxy and the firewall ERR_CONNECTION_CLOSED The relevant part of the configuration is this: server { listen 80; listen 443 default_server ssl; #ssl on; server_name example.com www.example.com; This is for my test website example.com on my local computer (127.0.0.1). Answer You will have to generate a self-signed certificate, or you could look into the Let's Encrypt project to get a free and publicly trusted cert. HTTPS is for secure traffic, and you can't do the encryption without the cert for the public and private keys. Adding a link: https://letsencrypt.org/
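For local testing, a throwaway self-signed certificate is enough to let nginx speak TLS on 443. A sketch (file names and the CN are placeholders matching the asker's example.com):

```shell
# Create a self-signed cert + key valid for one year, no passphrase
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout example.key -out example.crt \
  -days 365 -subj "/CN=example.com"
```

Then point nginx at the pair with ssl_certificate example.crt; and ssl_certificate_key example.key; inside the server block (the files would normally live somewhere like /etc/nginx/ssl/).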

domain name system - Create DNS alias

How can I create a DNS alias? I.e.: I have one domain named example.com and a second domain, example.net. The zone file of the first domain contains: example.com. IN A 10.1.2.3 www.example.com. IN A 10.1.2.3 *.example.com. IN CNAME example.com. What should I write in the second domain's zone file so that http://www.example.net redirects to http://www.example.com ? Answer DNS does not handle HTTP redirection. You can CNAME example.net to example.com just fine, but you would have to do the redirect on your webserver side.
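Putting the answer together: the zone file only aliases the name, and the actual redirect is issued by the web server. A sketch with the names from the question (the nginx block is an assumption about the server software in use):

```
; example.net zone file: alias the name only
www.example.net.  IN CNAME  www.example.com.
```

```
# Web server (nginx) answering for example.net and issuing the redirect
server {
    listen 80;
    server_name example.net www.example.net;
    return 301 http://www.example.com$request_uri;
}
```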

initramfs - Possible to use /dev/mapper in custom initrd (CentOS/RHEL)?

I'm attempting to set up a mapper device prior to boot, as it requires use of my /var partition, which is relied upon heavily during startup. My issue is that it is seemingly failing without providing any useful output - I'm unsure whether this is something I'm doing terribly wrong or a limitation I'm simply not aware of. I'm using flashcache and have it loading with my initrd. The flashcache binaries also work fine; it just fails. I've merely added the following to the initrd init script: setuproot echo Creating flashcache volumes for var flashcache_create -p around sdcachehome /dev/sdb1 /dev/sda7 echo Switching to new root and running init. switchroot Unfortunately it spits out an unhelpful "failed" and then continues on with the boot process. Creating flashcache volumes for var cachedev sdcachehome, ssd_devname /dev/sdb1, disk_devname /dev/sda7 cache mode WRITE_AROUND block_size 8, cache_size 0 Flashcache metadata will use 38MB of your 15995MB

licensing - How are SQL Server CALs counted?

Running a SQL Server, as far as I understand it, you need one CAL for every user who connects to the database server. But what happens if the only computer accessing the SQL Server is the server running your business layer? If, for example, you have 1 SQL Server, 1 business logic server, and 100 clients who all just query and use the business logic server. No client is using the SQL Server directly; no one is even allowed to contact it. So, since there is only one computer using the SQL Server, do I really need only 1 CAL? I somehow can't believe this would count as only 1 CAL needed for the SQL Server, but I would like to know why not. Answer You need CALs for every user of the business logic server, even though there is no direct connection between them and the SQL Server. Microsoft uses the term "multiplexing" for the scenario you describe. This is for SQL 2005, but I don't think it is any different for other versions: A CAL is r

linux - Forward SSH request to an External Server to an Internal Server

I currently have a VM configuration where one external IP points to a VM running an Nginx HTTP reverse proxy, which serves web pages from several internal VMs without external IPs. I need a similar setup to redirect SSH requests for certain hostnames to internal servers. This is for a hosted Git repository that sits behind the proxy and thus has no external IP of its own, so I require some kind of iptables rule to let me reverse proxy/forward SSH requests to the corresponding VMs. Can anybody point me in the right direction?
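One caveat first: unlike HTTP, the SSH protocol carries no hostname indication (there is no SNI equivalent), so the proxy cannot route by hostname; the usual approach is one forwarded port per internal VM. A DNAT sketch with placeholder interfaces and addresses:

```shell
# Forward TCP 2222 arriving on the external interface to the git VM's port 22
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 \
  -j DNAT --to-destination 10.0.0.5:22
# Let the forwarded traffic through the FORWARD chain
iptables -A FORWARD -p tcp -d 10.0.0.5 --dport 22 -j ACCEPT
```

Clients would then connect with ssh -p 2222 git@external-ip; remember that net.ipv4.ip_forward must be enabled on the proxy VM.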

linux - Why am I getting a Sudden Drop in throughput and sluggishness with no CPU increase

Occasionally, during random parts of the day, I get a 10-minute period of extreme sluggishness where my requests take 50-1000 times longer than they normally do. Note: I am on Apache/2.2.16 (Debian), running PHP 5.3.3. New Relic shows that the time is not spent in the database; it's supposedly spent while PHP is executing, before the first line of code (according to some traces). At the same time, I see a huge drop in throughput, to nearly 1/3 the normal amount. When I look at the graphs, I can see that CPU, memory, disk IO, and CPU wait IO are all at steady levels: no spikes at all. I don't see any error messages in the error log for PHP or the web server during that time. The server has more than enough memory; according to New Relic it's only using about 25%. Total memory is 3.3 GB. Note: The load average is about .25 on two cores, hence load is fairly low. I typically get about 1000-1500 requests per minute. Response times are usually 15ms to 150ms. Here are som

active directory - Single Computer not Accepting Domain Credentials after being added to Domain

I have one computer that refuses any domain credentials. This computer is running Win10 and my DCs are Server 2012 r2. After adding this particular computer to the domain it will not accept any domain user password. It simply says "The user name or password is incorrect. Try again". I know the password works, as the same domain administrator password was used to add the computer to the domain successfully. I've tried multiple domain accounts unsuccessfully and was able to manually add a domain user account under manage users via the local admin. I was not able to add the domain\Administrator because it says there is already an account. I even tried going through the Network ID wizard which accepted the domain admin account and password however it still won't log in. DNS is pointing correctly to our 2 domain controllers. It is showing under Domain Computers in AD and the DNS server shows forward and reverse lookups for it. However it is not showing up under DHCP leases

Active Directory forest with same name as root DNS zone and browsing to site with same name

Related to this previous question of mine about why it's a bad idea to use the root domain name as your Active Directory forest's name... I have an employer, whom I will refer to as ITcluelessinc for the purposes of simplicity (and honesty). This employer has an externally hosted website, www.ITcluelessinc.com, and a few Active Directory domains. Being clueless about IT, many years ago they set themselves up on an Active Directory forest named ITcluelessinc.prv, and performed unspeakable atrocities against it. These unspeakable atrocities eventually caught up with them, and with everything collapsing around them, they decided to pay someone a huge chunk of money to "fix it," which included migrating off the horribly broken ITcluelessinc.prv forest. And of course, being clueless about IT, they didn't know good advice when they heard it, and accepted the recommendation to name their new AD forest ITcluelessinc.com, instead of the sane advice they also got, an

nameserver - Is it required to register a name server with GLUE records?

I have a few domains and want to resolve their DNS records with my own name server. Let's say I have a DNS server with 2 fixed IP addresses and a domain name mydnsservers.net. I'd like to have 2 nameserver subdomains for my other domains: ns1.mydnsservers.net > 81.250.18.12 ns2.mydnsservers.net > 81.250.18.13 Can I just use a third-party DNS (e.g. AWS Route 53) for mydnsservers.net and set up two A records like this? ns1. A 81.250.18.12 ns2. A 81.250.18.13 Or is it mandatory to use my own DNS server for mydnsservers.net and configure GLUE records at the TLD registry? I know that the first option works in some cases, but my new registry gives an error when trying to use ns1.mydnsservers.net for one of the domains, because it's not registered as a nameserver (it doesn't have glue records). Any help would be much appreciated!

samba - Active Directory authentication without Kerberos?

A friend of mine has a Linux machine hosting Jenkins and a Windows 2008 Domain Controller. He uses Active Directory authentication in Jenkins and only specified the domain name and domain controller in the Jenkins configuration. All users can use their Windows domain user name and password to access the Jenkins web interface. I don't understand how this is possible. I had learned that you have to use Kerberos for user authentication in an Active Directory environment. The website of the Active Directory Jenkins plugin says that it uses the "LDAP service of Active Directory". I tried to find something like this on my Windows 2008 server but couldn't. Does "Active Directory Lightweight Directory Services" emulate an LDAP server, and does Jenkins just try to access the directory with the user name / password given (if the test succeeds, access to the web interface is granted)? If Kerberos is not necessary to authenticate AD users, is it possible to authent
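What the plugin does amounts to a simple LDAP bind against the DC, which you can reproduce from a shell (host, account, and base DN are placeholders; requires the OpenLDAP client tools):

```shell
# Simple bind as the user; a successful bind proves the password without Kerberos
ldapsearch -H ldap://dc01.example.com -x \
  -D 'jdoe@example.com' -W \
  -b 'dc=example,dc=com' '(sAMAccountName=jdoe)' cn
```

Every domain controller exposes this LDAP service on port 389 out of the box; AD LDS is a separate, standalone product and is not involved.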

hp proliant - HP Smart Array P400/256MB SAS Controller BATTERY

This is probably a dumb question. I recently purchased a couple of refurbished HP SB40C storage blades for our HP BladeSystem c3000 enclosure. Each comes with an HP Smart Array P400/256MB SAS controller and a 4.8V Ni-MH cache RAID battery. The Smart Array utility warns me about a low charge on the battery for each storage blade. Could it be because they probably haven't been turned on for a while, just sitting around in some storage room? I mean, of course batteries drain on their own, but I'm assuming this is the case rather than a coincidence that both batteries are dead. So what would most people assume in this case? Also, how would it affect my array if I use the controller without batteries? PS: I can't be sure, because my enclosure doesn't have enough power supplies (arriving soon) to turn on 4 blades and thereby possibly charge the batteries. I have to make a quick decision on whether to order new batteries ASAP or count on them simply needing to recharge.

Connecting SAS tape unit to existing P410 controller in HP Proliant ML330-G6 under VMware ESXi 4.1

I have an HP ML330-G6 server with a single SAS/SATA drive bay unit containing a pair of mirrored 250GB SATA disks, running off a P410 SAS controller - all working fine. Now I have just fitted an Ultrium 1760 SAS tape unit into the server, and I can't get both the tape and the disks recognised under VMware. So far I have tried two SAS configurations: Disk unit on one port of the P410 + tape unit on the other port using its provided cable Disk unit + tape unit on the SAS splitter cable supplied with the tape unit - tried on port 1 and port 2 of the P410 controller If I use the Array Configuration Utilities on the server, it always shows that I have a tape unit + disks attached, but VMware stubbornly only shows the tape unit. I have tried generic ESXi 4.1 and also the 'HP tweaked' version. Any relevant words of wisdom appreciated. Thanks. Answer I'd give the Ultrium drive a dedicated 1-port internal SAS controller and skip trying to leverage the existing P4

Determining MySQL Settings

We are running a server right now and the MySQL database is crashing quite often. We know that we need to find good settings for MaxClients and MaxRequestsPerChild as we are getting lots of traffic and when we get a spike the database goes down. Is there a good rule of thumb or formula that would help us figure this out?
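Worth noting: MaxClients and MaxRequestsPerChild are Apache (prefork MPM) settings rather than MySQL ones, and the usual rule of thumb for MaxClients is memory-based. A hedged back-of-the-envelope sketch (both numbers are assumptions you would replace with measurements from ps/top on your own server):

```shell
apache_mem_mb=2048   # assumption: RAM you can spare for Apache after MySQL etc.
child_rss_mb=40      # assumption: average resident size of one Apache child
echo $(( apache_mem_mb / child_rss_mb ))   # suggested MaxClients ceiling  # → 51
```

Keeping MaxClients at or below this value stops a traffic spike from pushing the box into swap, which is the usual way "the database goes down" under load.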

Do the SSH or FTP protocols tell the server to which domain I am trying to connect?

When using the ssh or ftp commands from the Bash shell, does the server that I am connecting to learn of the domain name used? I understand that the domain name is locally translated into an IP address via DNS. In HTTP, after that happens, the server is also told the original domain name, in order to serve the correct page or to present the correct TLS cert (SNI): host serverfault.com GET / Does something similar happen when connecting with ssh or ftp? I ask because I am trying to SSH into a server (GoDaddy web hosting) which expects a domain name but is not letting me in when I try to connect via user@IPaddress, as the DNS has not yet been moved to the GoDaddy IP address. Answer No, SSH clients do not pass the DNS name you connected to on to the server. As you said correctly, the name is resolved locally to the IP address. It looks like I was wrong about FTP. See the other answer for details.

HP ProLiant DL380 G7 servers will not POST

We have just received 2 HP DL380 G7's from our DR site. They have been running fine at the site for some time but we have tried to power them up in our DC and they will not post. We get a very brief flash of the post screen and then the systems power cycle. Both systems behave in the same way. Has anyone come across this issue before? There are no beep codes. We have tried removing the battery as well as the hard disks but it appears to be an issue that occurs prior to the system posting. Answer Check your KVM... Try with different keyboard/monitor or run headless. Don't repeat the mistakes made here . See: HP ProLiant DL360 G7 hangs at "Power and Thermal Calibration" screen Edit: It would be important to get into the ILO to see server messages. The ILO's settings are persistent, so removing the battery won't help you. Even if your issue is not KVM-related, the rest of the flowchart above should help you isolate the issue. If you have physical ac

HP P410 RAID CARD Issue - unassigned drives not detected by the OS

I just bought a lot of HP DL180 G6 servers using the HP P410 RAID card. I plugged in 12 x 2TB SAS drives + 2 x 73GB SAS drives (for the OS - RAID 1). All drives show up in the RAID array-creation page, so I created a RAID 1 array from the 2 x 73GB drives to install the OS (CentOS 6). The OS installed fine, but I can't detect the rest of the unassigned drives (those not in RAID arrays). So my question is: is there any way to make unassigned drives show up in the OS (without creating a RAID array for them)? I don't want to create RAID arrays for them; I just want them to show up like on all the Dell servers I have. If I change the RAID card mode to HBA, would that help? Any advice? Thanks Answer The Smart Array P410 (and all Smart Array controllers) are RAID devices only. There's no HBA or pass-through mode. What are you attempting to do? Are you installing something like ZFS or Windows Storage Spaces where you want to pass full disks to the operating system to be managed? If so, creating a bunch of RAID 0 single
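If you do go the single-drive RAID 0 route the answer starts to describe, it can be scripted with HP's CLI tool (hpacucli on systems of this era, ssacli on newer ones; the slot number and drive ID below are placeholders you would read from the tool's own listing):

```shell
# List physical drives, then wrap one of them in a single-disk RAID 0 logical drive
hpacucli ctrl slot=0 pd all show
hpacucli ctrl slot=0 create type=ld drives=1I:1:3 raid=0
hpacucli ctrl slot=0 ld all show
```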

centos7 - Connection refused on RHEL localhost with running server

I'm running a Java-based system on CentOS, and I sometimes see that clients of a service experience "Connection refused" errors for short periods. The interesting thing is that the server is, at the same time, serving requests. The error is intermittent. Is it possible that the listener socket's backlog fills up for a short time, causing new requests to be refused? I'm running out of clues, and I haven't been able to reproduce this outside the continuous integration system so far.

centos6 - Bounced mails not reaching postfix

I have postfix and dovecot installed on a CentOS 6.4 production (www.bw.co.uk) machine. The intent is to send all system and transactional messages and collect bounces. The email IDs that resulted in a bounce are then flagged, to prevent the site from sending further messages to those IDs. I have the necessary SPF records set up in DNS, and PTR records set up on my hosting provider's end. My MX records point to a different mail server, where our staff send and receive email. My issue is that, though I have been able to send mail from the production machine, I am unable to read bounces using my PHP snippet. Actually, I do not even know if the bounces are reaching the machine! I have a similar setup on my test (www.st.biz) machine with the same hosting provider; the MX, SPF and PTR records are set up in a similar fashion, i.e. the MX records point to another mail server where the staff send and receive mail. On the test machine I am able to read the bounces using the PHP program. The postfi

OOMK kills mysql and apache when there is still a lot[?] of mem

Let me first say that I'm pretty new to *nix systems and even more so to server management. Anyway, I've got a little problem. I have a VPS with 1 GB of memory; the system is Debian 6. I have a few sites running on it, though any load can only be caused by one of them. Recently, the OOM killer started killing MySQL, causing WordPress and phpBB to give errors that they can't connect to the MySQL server. The error itself is not good, especially if it happens at night and the site becomes unavailable until I wake up and restart MySQL. I probably have a bad line in my cron, which could be the cause of it all (again, I'm new to this): */20 * * * * sync; echo 3 > /proc/sys/vm/drop_caches Well, if you need any information, let me know, since I don't really know which information could be useful here. Also, I'd like to know whether it's bad to have the above cron task.

domain name system - How to block external access to the DNS service running on a Cisco router?

I have a Cisco (877) router acting as the main gateway for a network; it has a DSL connection and performs NAT from the internal network to its external, public IP address. The router allows SSH access for management, and this has been limited using an access list: access-list 1 permit line vty 0 4 transport input ssh access-class 1 in The router's internal web server isn't enabled, but if it were, I know its access could be limited using the same logic: ip http access-class 1 Now, the gotcha: this router also acts as a DNS server, forwarding queries to external servers: ip name-server ip name-server ip dns server My problem is that the router is perfectly happy to answer DNS queries when it receives them on its external interface. How can I block this kind of traffic so that the router only answers DNS queries from the internal network? Answer !Deny DNS from Public ip access-list extended ACL-IN_FROM-WAN remark allow OpenDNS lookups permit udp 208.67.222.2
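A sketch in the same spirit as the answer's ACL-IN_FROM-WAN, minus the provider-specific permits (the interface name is a placeholder; replies to the router's own forwarded queries are unaffected because they arrive on high-numbered ports, not port 53):

```
! Drop DNS queries arriving from the Internet, allow everything else
ip access-list extended ACL-IN_FROM-WAN
 deny   udp any any eq domain
 deny   tcp any any eq domain
 permit ip any any
!
interface Dialer0
 ip access-group ACL-IN_FROM-WAN in
```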

ubuntu - Apache DocumentRoot Doesn't Seem to Work

I have a basic Apache2 configuration, with only one VirtualHost enabled. I've set the DocumentRoot and the Directory to reference the index directory of the website I would like to display, however when I bring up the server index in a browser, it points to /home/user/public_html rather than /home/user/public_html/website Is this somehow intended or is my setup incorrect? Here's the virtualhost setup: DocumentRoot /home/user/public_html/website Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" Options Indexes MultiViews FollowSymLinks AllowOv

ErrorDocument when using Apache as reverse-proxy

'evening. I have an Apache server (2.2.19) and a 4D WebSTAR server (a rather obscure HTTP server) on the same physical machine. Apache only listens on the SSL port (443) and is used as a reverse proxy to 4DW (through an SSL vhost). Here are the two proxy directives I use: ProxyPass / http://127.0.0.1:xxxx/ timeout=15 connectiontimeout=15 ProxyPassReverse / http://127.0.0.1:xxxx/ Given that the 4DW server can go offline from time to time, I'd like to have a custom 503 error page to notify users of downtime or maintenance of the back-end app. Except Apache proxies everything from /htdocs/ (which is the DocumentRoot), and we need quick access to the ErrorDocument to edit it when needed (thus an external ErrorDocument is not an option). Is there any way to force Apache not to proxy a given directory (let's say htdocs/error/), or any solution at all for using an ErrorDocument outside of the DocumentRoot? Cheers Answer Sure, just exclude it from the Proxy
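The exclusion looks like this in Apache config: a ProxyPass with the ! target, which must appear before the catch-all ProxyPass (the port placeholder and paths follow the question):

```
# Serve /error/ locally instead of proxying it; order matters
ProxyPass /error/ !
ProxyPass / http://127.0.0.1:xxxx/ timeout=15 connectiontimeout=15
ProxyPassReverse / http://127.0.0.1:xxxx/
ErrorDocument 503 /error/503.html
```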

mac osx - Connection Refused for Apache 2.4 Virtual Host

I have Apache 2.4 running on Mac OS X. apachectl configtest gives me: Syntax OK. I have two virtual hosts set up, one called localhost, one called test.dev. DocumentRoot "/Users/psychomachine/Development/_localhost" ServerName localhost ServerAlias www.localhost Require all granted DocumentRoot "/Users/psychomachine/Development/test" ServerName test.dev ServerAlias www.test.dev Require all granted localhost just works: ↪ curl -I -L localhost 15:51:08 HTTP/1.1 200 OK Date: Tue, 08 Dec 2015 14:51:17 GMT Server: Apache/2.4.16 (Unix) Last-Modified: Tue, 08 Dec 2015 08:52:04 GMT ETag: "c-5265f1673f500" Accept-Ranges: bytes Content-Length: 12 Content-Type: text/html whereas test.dev doesn't: ↪ curl -I -L test.dev

storage - Will a raid cache be better for performance than hard drive cache speeds?

In a device that has RAID 5 and a 4 GB "RAID cache", does this help performance? I have hard drives with 32 MB of cache, and I want to know whether the 4 GB controller cache will make up for the performance difference between a 128 MB and a 32 MB hard drive cache. Hardware RAID. The front end of the device is a 2008 R2 server, a Dell R610.

domain name system - Moving a Registered Authoritative Nameserver to Another Server

We had a colo server and registered a domain for it. When we did that, we had to provide the names and IP addresses of the nameservers (pointing to the server), as the nameserver names were within that domain (let's call them ns.example.com and ns2.example.com). Over time, various people also hosted with us, and we just got them to change their authoritative nameservers to ns and ns2.example.com. We're in the process of migrating services to a shiny new server at a different location with (obviously) different IP addresses. We have now moved all the content for us and others, and updated the DNS records to point A/MX etc. to the new server. Now we wish to move the DNS service itself, with the aim of getting rid of the original server entirely. So the question is: if we just change the DNS for example.com and point the ns and ns2 A records to the new server, will it work (allowing time to propagate, with DNS served from both servers)? My concern is the original need to register the IP addres

linux - CIFS mounted drive setting "stick-bit" on all files, cannot change permissions or modify files

I have a folder mounted on an Ubuntu 8.10 server through cifs whose permissions I simply cannot change once mounted. Here is a breakdown of what's going on: All files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file is created on the Windows server or on the Linux machine. I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They all use the same fstab options and the same credentials: //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0 I've run chmod commands a million different ways, all of which report successfully changing the permissions. However, it doesn't change. The issue began after I updated from 8.04 to 8.10. Any idea why this may be happening on one machine? Since it started after an upgrade, I'm not sure what is the best thing to do. Any help you could give would g
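One knob worth trying: without CIFS POSIX extensions the client makes up the Unix modes, and mount.cifs lets you pin them explicitly with the file_mode and dir_mode options (real mount options; whether this sidesteps the specific 8.10 regression is an assumption to test). The fstab line from the question would become:

```
//server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino,file_mode=0644,dir_mode=0755 0 0
```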

ubuntu 14.04 - Password policies for Google Cloud Platform

Is there a way to set password policies for accounts accessing a project on Google Cloud Platform? Specifically, I need to meet the PCI-DSS requirements, which include things that PAM would normally handle on Ubuntu. These include expiring passwords every few months, minimum password strength, and preventing re-use of passwords. For clarity, I'm asking about the developers and admins that have access to the machines, not an application running on the cloud instances.

nginx reverse proxy without base url

I have an application for which I can't configure a base URL. Let's say its URL is 192.168.1.100:8011. I want to configure nginx so I can enter a URL like 192.168.1.100/myapp and it goes to the other app. The configurations I'm used to only work when I have a base URL. For example, if I have an app on 192.168.1.100:8011/myapp and I want to use nginx to serve it at 192.168.1.100/myapp, I have no problem, but the other way around I can't do it. Is that possible?
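This can work even though the app doesn't know its base URL, because proxy_pass with a URI part substitutes the matched location prefix. A sketch using the addresses from the question:

```
location /myapp/ {
    # The trailing slash makes nginx replace /myapp/ with / upstream,
    # so the backend sees requests at its root
    proxy_pass http://192.168.1.100:8011/;
}
```

The catch: any absolute links or redirects the app emits will still point at /, which is exactly why apps offer a base-URL setting; proxy_redirect (for Location headers) and sub_filter (for response bodies) can patch some of that up.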

domain name system - How to combine Active Directory with Split-Horizon DNS?

I'm in a quandary. I need to implement split-horizon DNS in my office, based on subnet. For example: users in 10.170.0.0/16 need to resolve "srv01.extra.company.com" to 10.25.0.170; users in 10.180.0.0/16 need to resolve it to 10.25.0.180; others in 10.0.0.0/8 need to resolve it to 10.25.0.5. Now, this is easy to implement using BIND. Unfortunately, my network is based on Active Directory; I can't possibly change the DNS servers of all workstations to just point to the BIND server, can I? They need to point to Domain Controllers. I had been playing with the idea of using stub zones or conditional forwarders, but based on my understanding, those methods would make the Domain Controllers perform the DNS resolution themselves, instead of having the workstations contact the relevant nameservers. What can you suggest to help solve this split-horizon problem? Additional info: The

security - Has my Linux server been compromised? How do I tell?

Running (X)Ubuntu 10.04.2 LTS behind a router. I just received an email from my root account on that machine, with the following subject: *** SECURITY information for : The message body contained this warning: jun 1 22:15:17 : : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/sh /tmp/tmpPHBmTO I can see no /tmp/tmpPHBmTO file, though there is a file named /tmp/tmpwoSrWW with a timestamp of 2011-06-01 22:14, i.e. just before the mentioned date/time. It's a binary file, and the content doesn't look familiar to me. Also, that file only has -rw------- permissions. As I read it, this means that someone (or something) has, or has had, access to my machine. Apparently not root access (yet), but still enough to write files to my /tmp directory at the very least. Does anyone have any pointers as to where I could look for more information on who could have done this, and how? My router is configured t

Apache localhost works great, except PHP is not responding

I'm running Apache on a Mac; PHP is installed, and I suspect something in httpd.conf is not set correctly. The symptom: requesting PHP files on localhost yields the raw PHP code, nothing else. Running apachectl -D DUMP_VHOSTS yields:

Permission denied: make_sock: could not bind to address [::]:80
(13)Permission denied: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs

Help.
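Raw PHP source coming back usually means the PHP module isn't loaded or the handler isn't mapped. On a macOS-era Apache setup the relevant httpd.conf lines look roughly like this; paths and module names vary by PHP version, so treat these as assumptions to verify against your install:

```apache
# Load the PHP module (adjust for your PHP version)
LoadModule php5_module libexec/apache2/libphp5.so

# Map .php files to the PHP handler
AddType application/x-httpd-php .php

# Let index.php act as a directory index
DirectoryIndex index.php index.html
```

The bind errors are a separate issue: binding to port 80 requires running apachectl with root privileges (e.g. via sudo).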

debian - sometimes, crontab is not reloaded by cron daemon

I'm asking this question because I couldn't find the answer here: Why is my crontab not working, and how can I troubleshoot it? Context: We have several servers running Debian wheezy. One backup task requires that we deactivate the crontab of a specific user during the backup, so we have a script, run daily, which roughly does (the user is legec):

# save the crontab to a file
crontab -ulegec -l > /home/legec/.backup/crontab
# empty the crontab
echo "" | crontab -ulegec

backup ...

# reload crontab
cat /home/legec/.backup/crontab | crontab -ulegec

And this works as we expect the vast majority of the time. This task runs on ~80 servers; depending on the server, the backup task takes from 1 minute up to 2 hours. Bug: Once in a while, cron does not detect the last reload and does not execute any of the jobs listed in the crontab. The file in /var/spool/cron/crontabs/legec has the expected content and modification date: $ ls -lh /var/spool/cron/crontabs/legec -rw--

domain name system - DNS A vs NS record

I'm trying to understand DNS a bit better, but I still don't completely get A and NS records. As far as I understand, the A record tells which IP address belongs to a (sub)domain; so far that's still clear to me. But as I understand it, the NS record tells which nameserver belongs to a (sub)domain, and that nameserver should tell which IP address belongs to the (sub)domain. But that was already specified by the A record in the same DNS file. So can someone explain to me what NS records and nameservers exactly do, because I have probably misunderstood something. Edit: If I understand you correctly, an NS record tells you where to find the DNS server holding the A record for a certain domain, and the A record tells you which IP address belongs to a domain. But what is the use of putting an A and an NS record in the same DNS file? If there is already an A record for a certain domain, then why do you need to point to another DNS server, which would probably give you the same info
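To make the interplay concrete, here is a minimal (hypothetical) zone file for example.com showing both record types side by side; the NS records delegate authority for the zone, while the A records map names to addresses, including for the nameservers themselves:

```
$ORIGIN example.com.
$TTL 3600
@    IN  SOA  ns1.example.com. admin.example.com. (1 7200 900 1209600 3600)

; NS: which servers are authoritative for this zone
@    IN  NS   ns1.example.com.
@    IN  NS   ns2.example.com.

; A: name-to-address mappings
@    IN  A    192.0.2.10
www  IN  A    192.0.2.10
ns1  IN  A    192.0.2.1
ns2  IN  A    192.0.2.2
```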

Which steps are required to avoid my server being considered as spam sender?

I'm looking to set up a webmail server that will be used by a lot of users who will receive and send emails. They will also have the ability to forward emails they receive. I'd like to know which steps are recommended or required to indicate to other mail services (Gmail, Outlook, etc.) that my server is not used as a spam sender (disclaimer: it's not! :p) but a legitimate one. I know I have to define an SPF TXT record, for example, but what other steps would you recommend? For example, is there a formula like having a number of servers proportional to the amount of email sent (so as to have different IP addresses)? Something like sending a maximum of 1M emails per IP per day? Anything else I'm missing? I tried to search online, but I mostly found advice on keeping emails sent from scripts (like PHP) out of the spam folder; I'm looking for the server/DNS configuration side. Thanks a lot for your help/tips, I appreciate it! Answer
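As a starting point, the three DNS mechanisms large providers check are SPF, DKIM, and DMARC. Illustrative records only; the domain, IP, and selector are hypothetical, and the DKIM public key comes from whatever signing software you deploy (e.g. opendkim):

```
; SPF: which hosts may send mail for example.com
example.com.                 IN TXT "v=spf1 mx ip4:198.51.100.25 -all"

; DKIM: public key for signatures the mail server adds
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0G..."

; DMARC: policy and reporting address for SPF/DKIM failures
_dmarc.example.com.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

A matching reverse DNS (PTR) record for the sending IP, agreeing with the server's HELO name, is also commonly checked.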

domain name system - Why aren't our DNS records propagating out into the internet?

We run the name servers for our domain on our network, using BIND/named. Let's call the domain example.com. One thing I've noticed recently: when I go to a website like http://network-tools.com and run queries on names defined on our name servers, I see changes instantly. For example, if I add an entry to our DNS server for funny.example.com and then look up that name on http://network-tools.com, I see the proper external static IP listed for it immediately. That tells me that any DNS requests related to example.com are coming straight to our DNS servers every time. My suspicions were confirmed earlier in the week when our DNS servers went down for a very short period. During that time, if I used http://network-tools.com to query example.com or any of its subdomains, I would get zero results, obviously because the DNS servers were down and couldn't be reached. So this brings me to my question. I thought changes to our DNS servers should be propog

performance - How to ask antivirus software to work slower and hence use less disk access?

It is our policy for our end users' computers (usually laptops) to have high-powered CPUs and GPUs, plenty of RAM (no less than 16GB) and HDD space (1TB), but we save money by choosing a lower rotation speed for the HDD. Our servers have very high rotation speeds instead. Usually this works quite well, but antivirus software is causing problems. I can observe in Task Manager that if total disk access (across all processes) is more than 4-5 MB/s, Task Manager indicates 100% disk usage and other applications slow down visibly. Usually the antivirus software, especially the scanner process, consumes the largest share of disk access. Of course, I can assign a lower priority to the antivirus software, but that affects CPU use (which is not the problem). Is it possible to slow down the disk access of the antivirus scanner process? It is OK that each downloaded file and each accessed web page is scanned in real time, but I don't see the necessity for high disk access a

amazon ec2 - How to configure my Elastic Load Balancer to balance SSL traffic?

I'm totally lost, so I apologize if I'm not making sense. I need to create a load balancer in EC2 for our application servers. I'm trying to make the ELB balance traffic over SSL (8443). However, it's asking me for an SSL certificate; it looks to be asking for a public and private key (PEM encoded). The servers behind the ELB have a keystore file, which our developers created using Oracle Java's keytool program; the file is binary. The ELB appears to expect a text, PEM-formatted key. Why does the ELB require me to enter a certificate? Can't the ELB just forward SSL traffic from one side to the other and let the servers handle SSL? Are the certificates/keystore file related, and must the keys match on both the ELB and the servers? The AWS documents said to create a private key and certificate using openssl. Can I just independently run openssl to create an SSL certificate for the load balancer and leave the keystore file on the servers alone? Thank
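For the last question: if the ELB listener terminates SSL, its certificate is indeed independent of the backend keystore, so generating one with openssl is possible; for testing, a self-signed one works (browsers will warn). A sketch with a hypothetical common name:

```shell
# Generate a 2048-bit private key and a self-signed certificate
# valid for 365 days (substitute your own domain for the CN).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout elb-key.pem -out elb-cert.pem \
    -days 365 -subj "/CN=app.example.com"

# The ELB console expects the PEM-encoded bodies of these files:
cat elb-key.pem    # paste into the private key field
cat elb-cert.pem   # paste into the certificate field
```

If you instead want the ELB to pass encrypted traffic through untouched, configure a TCP listener on 8443 rather than an HTTPS one; a TCP listener needs no certificate on the ELB at all.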

vmware vcenter - ESXi 6.7u2 DL380 GEN9 Hosts keep going unresponsive

We have recently upgraded to ESXi 6.7 U2. Since the upgrade, our hosts keep going into an 'unresponsive' state. The VMs are all still running, although they show as 'disconnected' in vSphere, and I can ping the host and the VMs fine. If I log onto the ESXi console and hit F2 to open the menu, it takes several minutes to actually switch screens, and selecting an option in there takes forever. Until today I had been resetting the host to get it back online; today, however, restarting the management services (/sbin/services.sh restart) brought it back online. I have seen some discussion online about SIOC causing this issue, but none of our datastores use SIOC. Has anyone come across this before? We have an open ticket with VMware support, but they are not being particularly quick about it.

windows - Measuring SSD wear out behind LSI MegaRAID controller?

I'm trying to find out how to measure the total bytes written (or a percentage of the maximum expected; either is fine) for a few RAID arrays behind LSI controllers. The controllers are all LSI MegaRAID SAS 9271-8i. I've tried using MegaRAID Storage Manager and MegaCLI, but neither seems to show the information I need. I've found a couple of solutions online, but they only seem to be for Linux, where you can patch the kernel or use smartctl in unconventional ways; that won't work for me on Windows. I'd really like to avoid pulling the drives, putting them in another machine, testing with SMART, and then putting them back; that would be a real pain in the neck. If it's important, each controller has two virtual drive groups of 4 disks each, in RAID 10, with SAS SSDs forming the groups.

ssl - nginx for https and http always ends up in the http way

This is the first time I've tried to set up an nginx reverse proxy. I have a Subversion server running HTTP on port 44801. Now I want nginx to listen on port 80 and forward, but also to listen on 443, do the SSL termination, and then forward. This is my conf file:

server {
    # Port 80 only on local network
    listen 80;
    server_name freundx;

    location /svn {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://freundx:44801/svn;
    }
}

server {
    # ssl is local and external
    listen 443 ssl;
    server_name freundx some.domain.com;

    ssl_certificate /etc/niginx/ssl/mycert.crt;
    ssl_certificate_key /etc/niginx/ssl/mycert.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location /svn {
        proxy_set_header Host $host;
        proxy_set_he

apache 2.2 - Is it possible to point 2 virtual hosts with different domains and unique IP addresses to the same folder?

Is it possible to point 2 virtual hosts with different domains and unique IP addresses to the same folder? If so, should I be aware of anything? Also, is it possible for each of those domains to have SSL certificates even though they are pointing to the same directory? Sorry if I sound confused. Thanks, Darren Answer Is it possible to point 2 virtual hosts with different domains and unique IP addresses to the same folder? Yes. Should I be aware of anything? You should read up on virtual hosting in Apache. Also, is it possible for each of those domains to have SSL certificates even though they are pointing to the same directory? It depends. If they can use the same certificate, and that certificate is a wildcard certificate, it's quite simple. E.g., if you have two domains foo.example.com and bar.example.com, you could use a wildcard SSL certificate on *.example.com for both of them. Otherwise, you'll have to try Server Name Indication, which is a bit more com
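Since each vhost here has its own IP address, SNI isn't even needed: each can bind its own address with its own certificate while sharing one DocumentRoot. A sketch with hypothetical names and paths:

```apache
<VirtualHost 192.0.2.10:443>
    ServerName foo.example.com
    DocumentRoot /var/www/shared
    SSLEngine on
    SSLCertificateFile    /etc/ssl/foo.crt
    SSLCertificateKeyFile /etc/ssl/foo.key
</VirtualHost>

<VirtualHost 192.0.2.11:443>
    ServerName bar.example.org
    DocumentRoot /var/www/shared   # same folder, different certificate
    SSLEngine on
    SSLCertificateFile    /etc/ssl/bar.crt
    SSLCertificateKeyFile /etc/ssl/bar.key
</VirtualHost>
```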

windows - active directory domain vs dns domain - are they the same?

I am new to the Windows environment and am trying to set up AD and a Domain Controller. Are an Active Directory domain and a DNS domain technically the same thing? Must they be the same at all? E.g., if my DNS domain is "company.com", must my AD domain be "company.com" too? The reason for asking is the following observations: I have seen workstations join domains that I am quite sure are not valid public domain names (e.g. xxx.local). I have seen 2 different domain controllers in 2 different networks with different server hostnames/domain names (FQDNs), e.g. dc1.brancha.com and dc1.branchb.com, allowing workstations in their own networks to log in to the same Active Directory domain (xxx.local), which is totally unrelated to the servers' DNS domain names. Am I missing something?

hp - "Parity Initialization Status: In Progress" for long time

Two weeks ago I installed 6 new hard disks (HP 500GB 6G 7.2K SFF 2.5-inch 2-port SAS DualPort Midline, 507610-B21) in an HP DL380 G5 with a Smart Array P400 (RAM Firmware Revision 2.08, ROM Firmware Revision 2.08). I created a new logical disk with 5 physical disks in RAID 5 and a spare drive (array B):

[...]
=> ctrl all show config

Smart Array P400 in Slot 1 (sn: P61620D9SUKHBP)

   array A (SAS, Unused Space: 0 MB)
      logicaldrive 1 (136.7 GB, RAID 1, OK)
      physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 146 GB, OK)
      physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 146 GB, OK)

   array B (SAS, Unused Space: 0 MB)
      logicaldrive 2 (1.8 TB, RAID 5, OK)
      physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 500 GB, OK)
      physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 500 GB, OK)
      physicaldrive 1I:1:7 (port 1I:box 1:bay 7, SAS, 500 GB, OK)
      physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 500 GB, OK)
      physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 500 GB,

redirect - In Nginx, how can I rewrite all http requests to https while maintaining sub-domain?

I want to rewrite all HTTP requests on my web server to be HTTPS requests. I started with the following:

server {
    listen 80;
    location / {
        rewrite ^(.*) https://mysite.com$1 permanent;
    }
...

One problem is that this strips away any subdomain information (e.g., node1.mysite.com/folder). How could I rewrite the above to reroute everything to HTTPS while maintaining the subdomain? Answer It turns out my first answer to this question was correct at a certain time, but it turned into another pitfall; to stay up to date, please check Taxing rewrite pitfalls. I have been corrected by many SE users, so the credit goes to them, but more importantly, here is the correct code:

server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name my.domain.com;
    # add Strict-Transport-Security to prevent man in the middle attacks

Email redirected to Gmail contains failed SPF

I have an email account on my domain, e.g. adrian@mysite.com, and on my dedicated server mysite.com I redirect messages from adrian@mysite.com to adrian@gmail.com. Now, if an external user like john@example.com sends me an email at adrian@mysite.com, it properly lands in Gmail, but in spam, and the Gmail headers say that SPF failed, because example.com doesn't designate mysite.com as a permitted sender. Is there something I can do about this? It doesn't sound right; mysite.com should not claim that it is sending email for john@example.com, it should just be labeled somehow as a redirect (from adrian@mysite.com to adrian@gmail.com). Answer The behaviour you describe (passing the envelope sender through unchanged) is traditionally how mail forwarding has behaved; after all, if there's a problem with delivery, that's where errors need to go. To avoid SPF issues, forwarding services can use SRS to construct a new (local) envelope sender address which routes to the or
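To illustrate what SRS does: the forwarder rewrites the envelope sender into its own domain, so the receiving server's SPF check is made against the forwarder rather than the original sender, while bounces still route back through it. Schematically (the hash and timestamp fields below are placeholders for values the SRS implementation generates):

```
Original envelope sender:     john@example.com
After SRS at mysite.com:      SRS0=HHH=TT=example.com=john@mysite.com
```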

web server - How many requests should my webserver be able to handle?

I'm not going into specifics on the specs, since I know there is no real answer for this. But I've been doing load testing today with the ab command in Apache, and got to the number of 70 requests per second (1000 requests with 100 concurrent users) on a page that loads from 4 different DB tables and does some manipulation of the data. So it's a fairly heavy page. The server isn't used for anything else for now, and the load on it is just me, since it's in development. But the application will be used daily by many users. Is this enough? Or should I even worry (as long as it's over X requests a second)? I'm thinking that I shouldn't worry, but I'd like some tips on this. Answer 70 requests per second works out to an hourly rate of 252,000 page renders per hour. If you assume that the average browsing session for your site is 10 pages deep, then you can support 25,000 uniques per hour. You should probably check these numbers against
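The arithmetic in the answer above can be sketched in shell; note the exact figure is 25,200 uniques per hour, which the answer rounds down to 25,000:

```shell
# 70 requests/second sustained for one hour
renders_per_hour=$((70 * 3600))
echo "$renders_per_hour"            # 252000

# assumed session depth: 10 pages per visitor
uniques_per_hour=$((renders_per_hour / 10))
echo "$uniques_per_hour"            # 25200
```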

Remote MySQL connection fails (10060)

When I try to connect to a MySQL database from a remote computer, I get a prompt saying: Connection Failed: [HY000] [MySQL][ODBC 5.1 Driver]Can't connect to MySQL server on 'XXX.XXX.XX.XX' (10060). I have created a user account in MySQL Administrator and added a host to enable remote access, and I have also made an exception in my Windows Firewall for port 3306, but the connection still fails. What is the problem? Thanks!
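Two server-side settings commonly behind error 10060 are worth checking: whether MySQL listens on a remote-reachable interface, and whether the user's grant allows the remote host. A sketch only; paths and values depend on your install:

```ini
# my.ini / my.cnf -- listen on all interfaces, not just loopback
[mysqld]
bind-address = 0.0.0.0
# and make sure "skip-networking" is NOT set anywhere in this file
```

On the grants side, the user needs a host entry matching the client, e.g. 'user'@'%' (any host) or 'user'@'192.168.1.%' rather than only 'user'@'localhost'.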

Nginx rewrite prevent redirection

I am trying to make a URL rewrite as follows:

location /statics/ {
    alias /var/project/statics/;
}

location /statics/cache {
    rewrite /statics/cache/(.*?)/(.*)$ http://$host/statics/$2 last;
}

The URL would be: http://ANY-DOMAIN.com/statics/cache/1.2.5/file/path/file.js and the original file is: http://ANY-DOMAIN.com/statics/file/path/file.js. The problem: the URL is redirecting (changing the URL) to the original file, and I want to prevent the redirection. Current status: redirects to the original URL.
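This behaviour is inherent to the rewrite directive: when the replacement string begins with http:// (or https://), nginx issues an external redirect to the client instead of rewriting internally. Dropping the scheme and host keeps processing internal; a sketch based on the config above:

```nginx
location /statics/cache/ {
    # URI-only replacement: handled internally, browser URL unchanged;
    # "last" re-runs location matching, landing in "location /statics/"
    rewrite ^/statics/cache/[^/]+/(.*)$ /statics/$1 last;
}
```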

networking - Can't access port 80 on public IP

I have an Ubuntu box at a local IP behind an Arris router from my ISP. I've set up port forwarding on the router for port 80 such that it forwards TCP/UDP to the local IP of the Ubuntu box. However, I cannot telnet using the public IP, and I can't access Apache's welcome page on Ubuntu via the public IP. I can telnet to Ubuntu's local IP from my Mac on port 80, and can SSH into it as well. I have also disabled the firewall on Ubuntu for testing, yet I'm still unable to access Ubuntu via the public IP. I have checked, and port 80 is open on the public IP; so is 22, but not 21. Not sure where to go from here. Any advice? Edit: The output of traceroute is as follows:

traceroute to 72.24.237.82 (72.24.237.82), 30 hops max, 60 byte packets
praha-4d-c1-vl55.masterinter.net (77.93.199.253) 0.499 ms 0.493 ms 0.544 ms
ae-5-5.car1.KansasCity1.Level3.net (4.69.135.229) 149.079 ms CABLEONE.car1.KansasCity1.Level3.net (4.53.32.30) 146.839 ms 146.722 ms
CABLEONE.car1