
Posts

Showing posts from August, 2015

local area network - IPv6 neighbour discovery problem

I would like to get IPv6 addresses on my LAN PCs, which sit behind a router with an IPv4-only ISP. I have a dd-wrt router and have set up an IPv6 tunnel with Hurricane Electric ( http://tunnelbroker.net/ ). What works: I can ping6 from the router to IPv6 addresses, and from outside the router can be pinged on its tunneled IPv6 address. I've also set up an IPv6 address on the router's LAN interface and configured radvd, forwarding rules and sysctls so that my LAN gets addresses from the assigned /64 range. I get an IPv6 address on both Windows 7 and Ubuntu (not just the fe80: one but the 2001: one). Windows 7 mostly works, though it almost always loses the first ping packet; after that the rest are fine. That's strange, but not a big problem. What does not really work: Ubuntu does not work right after boot, but if I ping the router's LAN interface's IPv6 address, it works for about half a minute and then stops. I figured out that if I ping the LAN interface, the neighbour table gets a new line (ip -6 ne output):
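For context, a minimal radvd.conf sketch of the kind of setup the question describes; the interface name (br0) and prefix (2001:db8:1234:1::/64) are illustrative, not taken from the question:

    # /etc/radvd.conf - hypothetical example
    interface br0
    {
        AdvSendAdvert on;        # send router advertisements on the LAN interface
        MinRtrAdvInterval 30;    # seconds between unsolicited RAs
        MaxRtrAdvInterval 100;
        prefix 2001:db8:1234:1::/64
        {
            AdvOnLink on;        # prefix is on-link
            AdvAutonomous on;    # clients build their own addresses via SLAAC
        };
    };

When a client stops responding after a short while, comparing the neighbour tables on both ends (ip -6 neigh show dev br0 on the router, ip -6 neigh on the client) usually shows which side's entry has gone STALE or FAILED.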

Check zone assignment from Windows Explorer in Windows 8/Server 2012

I'm trying to determine whether a particular folder is assigned to the Intranet zone. On previous versions of Windows, I could look in the status bar at the bottom of a Windows Explorer window, like this: I'm trying to find the same information on Windows Server 2012, and am having trouble. I know I can get the zone of a web site by loading it in IE and looking at the page's properties. But I can't use this to get the zone of a folder -- the folder opens right up in Windows Explorer (where I can't find the zone info). So -- is the zone information accessible from within Windows Explorer? Answer I had this same frustration too, so I did quite a bit of digging a while back and found something interesting. Read this first: http://blogs.msdn.com/b/cjacks/archive/2011/08/01/what-happened-to-the-zone-information-on-the-status-bar-in-ie9.aspx I know that my link references IE 9 instead of Windows Explorer, but every version since 9 up to 11 has had the status

domain name system - Can I Point about 30 DNS Zone NS entries to the same two IP Addresses?

I have about 60 domain names that I am creating private name servers for. Is it possible for me to just point ns1.mydomain.com, ns1.mydomain2.com, ns1.mydomain3.com etc. to the same nameserver IP address through the DNS zone record for each domain? And if I can do that, do I have to put the original nameserver domain name in the DNS zone SOA, or can I just map it to ns1.mydomain.com? Does any of that make sense? Otherwise I am going to create something like 30 nameservers on this one machine. Also, I am using all C-Class IPs. I don't want to create 30 nameservers on this one machine and waste precious IP addresses. Any tips? Thanks in advance for your help. I also forgot to mention that I am trying to keep the fact that these are all on the same server private, so the SOA record for each domain needs to point to its own nameserver, not the domain name for the real nameserver. Answer This is my conclusion from all the help and information given: Actual Name Servers: ns1.maindomai
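As a sketch of what this approach looks like, each domain's zone simply carries in-zone A records for its own vanity nameserver names, all pointing at the same two shared addresses (the addresses below are documentation examples, not real ones); the registrar additionally needs matching glue records for each domain:

    ; zone for mydomain2.com - the same pattern is repeated in every zone
    $TTL 86400
    @    IN SOA ns1.mydomain2.com. hostmaster.mydomain2.com. (
             2015080101 ; serial
             3600       ; refresh
             900        ; retry
             1209600    ; expire
             86400 )    ; minimum
         IN NS  ns1.mydomain2.com.
         IN NS  ns2.mydomain2.com.
    ns1  IN A   192.0.2.10   ; shared address, reused in every domain's zone
    ns2  IN A   192.0.2.11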

ssd - Hardware specs for web cache

I am looking for recommendations on hardware specs for a server that needs to be a web cache for a user population of about 2,000 concurrent connections. The clients are viewing segmented HTTP video at bitrates ranging from 150 kbps to 2 Mbps. Most video is "live", meaning segments of 2-10 seconds each, of which 100 or so are maintained at a time. There are also some pre-recorded fixed-length videos. How would I go about doing the provisioning calculation for such a server: what kind of HDD (SSD?), how many NICs, how much RAM, etc.? I am thinking of using Varnish on Linux, all the RAM I can get my hands on, and 2 CPUs with 6-8 cores each. Answer Will Varnish be able to share objects across sessions? In other words, is your architecture such that the object being loaded by the video stream client is /somestream/1h42m0s-1h42m10s/ , as opposed to /somestream/for/joeuser ? In that case, based on what you're describing, I'd skip the SSDs and just go for a ton of RAM; V
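As a rough illustration of the RAM-backed approach the answer leans toward, Varnish can be told to keep the whole cache in memory with the malloc storage backend; the size below is only an example and would be derived from the actual RAM budget:

    # hypothetical varnishd start line: listen on :80, keep the cache entirely in RAM
    varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,48G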

linux - Sending netmask and gateway/route with dhcp for ipv6

I have set up a DHCP server to assign IPv6 addresses to my clients. However, both the "routers" and "subnet" options are ignored. I have read that IPv6 does not get these from DHCP, but rather from router advertisements. However, I find this very strange. Why does it get a static IP from DHCP, but the netmask from the router? Weird architecture. Anyway, I am wondering if there is a way to easily add these "missing" features to DHCPv6, or any other workaround. The end result should be that I can set the "ip6", "netmask" and "gateway" from one central place, based on the MAC address of the client, just like I can do with IPv4. I do not want to mess around with IPv6 auto-configuration and stateless stuff. The gateway problem I could solve in a "bad" way by setting things on the "up" event in the network configuration, but I can't find a way to change the netmask of the interface after it has been bro
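For illustration, a sketch of how the two halves are usually split: DHCPv6 hands out the addresses (identifying clients by DUID rather than MAC), while router advertisements supply the prefix and the default route. Interface names, prefixes and the DUID below are placeholders:

    # /etc/dhcp/dhcpd6.conf - fixed address for one client (ISC dhcpd)
    subnet6 2001:db8:abcd:1::/64 {
        range6 2001:db8:abcd:1::100 2001:db8:abcd:1::1ff;
        host client1 {
            # DHCPv6 matches on the client's DUID, not its MAC address
            host-identifier option dhcp6.client-id 00:01:00:01:12:34:56:78:aa:bb:cc:dd:ee:ff;
            fixed-address6 2001:db8:abcd:1::50;
        }
    }

    # /etc/radvd.conf - the router advertisement supplies the on-link prefix and gateway
    interface eth0 {
        AdvSendAdvert on;
        AdvManagedFlag on;       # "get your address from DHCPv6"
        AdvOtherConfigFlag on;   # "get other options (DNS etc.) from DHCPv6"
        prefix 2001:db8:abcd:1::/64 {
            AdvOnLink on;
            AdvAutonomous off;   # don't self-assign via SLAAC
        };
    };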

bind - Problem with www version of domain in zone file

I am trying to create a zone file which points example.com and www.example.com to my server's IP address. I read the docs, but am still having a hard time with it. Currently, the non-www version works, but the www doesn't. I don't get any config errors when I start bind. Here is my zone file:

    $TTL 86400
    @   IN SOA ns1.myserver.com. postmaster.myserver.com. (
            2010121801 ; serial number YYYYMMDDNN
            1800       ; Refresh
            600        ; Retry
            864000     ; Expire
            1800 )     ; Min TTL
        IN NS ns1.myserver.com.
        IN NS ns2.myserver.com.
        IN A  99.99.99.99
    www IN CNAME @

Answer Why don't you just make an entry like www IN A 99.99.99.99
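For reference, a minimal pair of alternatives for the www record, assuming example.com is the zone origin; either line on its own is enough:

    ; give www its own address record ...
    www  IN A      99.99.99.99
    ; ... or alias it to the apex name (equivalent to the original "www IN CNAME @")
    www  IN CNAME  example.com.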

exchange - Emails are going to spam

Hey guys, I am currently running Exchange 2010. I have implemented an SPF record, and tried to implement DKIM/DomainKeys using a domain sink, but it doesn't seem to work. The problem I am having is that all my emails go to spam whenever I email someone, whether it is MSN/Yahoo/Gmail. For MSN I fixed it, since I subscribed to their sender framework program. Here are the original copies from Gmail and Yahoo: Yahoo: From Sami Sheikh Wed Jan 27 14:15:51 2010 X-Apparently-To: sunny_3000ca@yahoo.ca via 98.136.167.166; Wed, 27 Jan 2010 06:19:52 -0800 Return-Path: X-YahooFilteredBulk: 67.55.9.182 X-YMailISG: 58M0TdIWLDvbv_d_qz4ABPsuq0Fmn1fLYMy08ZnNKPgA1aH3sVNx_KKFsiBK8ZOTBVDwBVnpTvRNkuTZc2UDsNMbj6nV9hfE43MQz3tXRV3.rh62wcp4oqT8AuzKKU5JSxU5g2AH4NzOmT5nGNiRyNEi6xazlMZTDm0rnfWbVECGV4RHzwM1TEadla6Bq_itel6hNinq_6MnPRxu2vX_fddmlCAG1Fi6X0ivjkKPqSr..MvpO8MnlTQTZZjRSoxLZUOqg0vjTPEPary5d_xf3MaS6IsRIScPMMk- X-Originating-IP: [67.55.9.182] Authentication-Results: mta1066.mail.mud.yahoo.com from=; domainkeys=neut

Ubuntu Server Hotswap RAID 1: Hardware or Software?

I'm building new servers for a project I'm working on. They will all run Ubuntu Server x64 (10.04 soon) and require a RAID 1 hotswap configuration (just two drives) to minimize downtime. I'm not worried about RAID performance. The server hardware will have plenty of CPU power, and I'm only doing a RAID 1. My only requirements are: Everything, including the OS, must be mirrored. There must be no downtime when a drive fails. I need to be able to swap out the failed drive with another and have the RAID rebuild itself automatically (or maybe by running a simple script). I'm wondering if the built-in Ubuntu software RAID can handle this, particularly the hotswap part. 10.04 looks promising. I'm considering buying the 3Ware 9650SE-2LP-SGL RAID controller, but with the number of servers we're purchasing, that would increase the total price quite a bit. Any advice at all would be appreciated. Thank you. Answer I have hot swapped drives using th
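For what it's worth, a sketch of the manual replacement steps with Linux software RAID (md); device names are examples and the partition-table copy assumes an MBR layout:

    mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the dying member failed (if the kernel hasn't already)
    mdadm --manage /dev/md0 --remove /dev/sdb1   # remove it from the array
    # physically swap the drive, then give it the same partition layout:
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    mdadm --manage /dev/md0 --add /dev/sdb1      # add the new partition; resync starts automatically
    cat /proc/mdstat                             # watch the rebuild progress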

best practices - Server room wiring questions

I'm looking at rewiring our server room to be more presentable, organized, and easier to troubleshoot. Right now we have 4 HP racks (don't know the model right now). We do not have a building UPS, so we are placing UPSes in each rack (10K, 5K). This is causing a lot of mess, but I unfortunately do not have another way to do it. We also do not have raised floors. One of my racks contains the firewalls, network switches, and fiber switches. Currently we are running cables from our main switch to all of the servers / SANs in the other racks (directly). The other racks contain mostly servers, with one that also has a SAN in it. Here are my questions: Should I place a patch panel in each rack and run cables from our main switch to the patch panels? Then connect each server to the patch panels that are in their own rack? Right now we have the air conditioning pointed down the aisle where the fronts of the racks are (preparing for a hot aisle / cold aisle setup when we expand). Rea

linux - CentOS root user crontab for mysqldump

I am using CentOS 6.6 and am trying to set up a crontab. I made a .sh script which runs perfectly when executed manually. The command is the following: mysqldump --skip-lock-tables --single-transaction --hex-blob --flush-logs --master-data=2 -u root -p'password' database1 > database2.sql However, when I set it up in the /etc/crontab file, it won't run. Here are the contents of the crontab file:

    SHELL=/bin/bash
    PATH=/sbin:/bin:/usr/sbin:/usr/bin
    MAILTO=root
    HOME=/
    # For details see man 4 crontabs
    # Example of job definition:
    # .---------------- minute (0 - 59)
    # | .------------- hour (0 - 23)
    # | | .---------- day of month (1 - 31)
    # | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
    # | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
    # | | | | |
    # * * * * * user-name command to be executed
    * * * * * root /home/user/public_html/default1.sh

Also, I would like this script to execute in directory /home/user/public_html My scrip
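One common way out, sketched under the assumption that the script just needs to run from that directory, is to keep the crontab entry simple and do the cd inside the script (paths, times and the log file below are illustrative):

    # /etc/crontab entry - run nightly at 02:30 as root and keep a log
    30 2 * * * root /home/user/public_html/default1.sh >> /var/log/default1.log 2>&1

    # /home/user/public_html/default1.sh
    #!/bin/bash
    cd /home/user/public_html || exit 1
    /usr/bin/mysqldump --skip-lock-tables --single-transaction --hex-blob --flush-logs \
        --master-data=2 -u root -p'password' database1 > database2.sql

Using the full path to mysqldump (or keeping /usr/bin in PATH, as the crontab above already does) avoids the most common reason jobs work interactively but not from cron.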

heroku - Redirect to HTTPS and Apex Domain with Nginx location Configuration

I would like to force HTTPS and the apex domain (e.g. https://example.com) in my application through nginx configuration using location blocks. I currently have the following nginx_app.conf file (which works with both the apex and the www subdomain, and both http and https):

    location / { try_files $uri @rewriteapp; }
    location @rewriteapp { rewrite ^(.*)$ /app.php/$1 last; }
    location ~ ^/(app|config)\.php(/|$) {
        # fastcgi_pass directives go here...
    }

To force the apex domain and HTTPS, I tried using if-statements as follows, checking the $scheme and $host variables, but I get an error that the page is not redirecting properly. I also added an HSTS directive.

    location / {
        if ($scheme = http) { rewrite ^/(.*) https://$host/$1 permanent; }
        if ($host = www.example.com) { rewrite ^/(.*) https://example.com/$1 permanent; }
        try_files $uri @rewriteapp;
    }
    location @rewriteapp { rewrite ^(.*)$ /app.php/$1 last; }
    location ~ ^/(app|config)\
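A hedged alternative to the if-statements, using separate server blocks so the redirect logic never reaches the location blocks at all (certificate paths and the include path are placeholders):

    # plain HTTP for both hostnames -> HTTPS apex
    server {
        listen 80;
        server_name example.com www.example.com;
        return 301 https://example.com$request_uri;
    }
    # HTTPS on the www host -> HTTPS apex
    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
        return 301 https://example.com$request_uri;
    }
    # the real site: HTTPS on the apex only
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
        add_header Strict-Transport-Security "max-age=31536000";
        include /path/to/nginx_app.conf;   # the existing location blocks
    }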

linux - Unusually high dentry cache usage

Problem A CentOS machine with kernel 2.6.32 and 128 GB physical RAM ran into trouble a few days ago. The responsible system administrator tells me that the PHP-FPM application was not responding to requests in a timely manner anymore due to swapping, and having seen in free that almost no memory was left, he chose to reboot the machine. I know that free memory can be a confusing concept on Linux and a reboot perhaps was the wrong thing to do. However, the mentioned administrator blames the PHP application (which I am responsible for) and refuses to investigate further. What I could find out on my own is this: Before the restart, the free memory (incl. buffers and cache) was only a couple of hundred MB. Before the restart, /proc/meminfo reported a Slab memory usage of around 90 GB (yes, GB). After the restart, the free memory was 119 GB, going down to around 100 GB within an hour, as the PHP-FPM workers (about 600 of them) were coming back to life, each of them showing between 30 and
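For anyone hitting the same symptom, a few standard commands make the slab/dentry situation visible (all need root; the vfs_cache_pressure value is just an example):

    slabtop -o | head -15                        # which slab caches dominate (dentry usually tops the list here)
    grep -E 'Slab|SReclaimable' /proc/meminfo    # total slab vs. reclaimable slab

    sync && echo 2 > /proc/sys/vm/drop_caches    # 2 = free reclaimable dentries and inodes (a cache flush, not destructive)

    sysctl -w vm.vfs_cache_pressure=1000         # bias reclaim toward dentries/inodes (default is 100)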

ubuntu - Non-heap memory leak JVM

I have GlassFish v4.0 set up on an Ubuntu server running on the Oracle Java virtual machine, and the JVM process's resident memory size (obtained via the "top" command) grows until the JVM no longer has memory to create a new thread. What I have: a VPS server with 1 GB of RAM and a 1.4 GHz processor (1 core), Ubuntu Server 12.04, Java(TM) SE Runtime Environment (build 1.7.0_51-b13), Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode), GlassFish v4.0 running my Java EE webapp. The VM runs with the following parameters: -XX:MaxPermSize=200m -XX:PermSize=100m -Xmx512m (I can add all of them if relevant). What's the problem: RAM usage (resident memory) grows all the time, depending on usage 10-100 MB per hour, until the JVM cannot allocate native memory. What have I tried: I've lowered the max heap space, which only buys time until the JVM crashes anyway. I've attached Plumbr ( https://portal.plumbr.eu/ ), which does not detect any memory leak in the heap. I have also set the max perm size to a lower value. I would like my JVM to be stable, as

ubuntu 14.04 - Why does crontab store my crontab under /var/spool/cron/crontabs/user instead of /etc/cron.daily

I wanted to create a crontab doing some job for me. Because my server isn't running 24/7 I decided to use anacron instead. OK, so I took a short look into /etc/anacrontab:

    1        5   cron.daily    run-parts --report /etc/cron.daily
    7        10  cron.weekly   run-parts --report /etc/cron.weekly
    @monthly 15  cron.monthly  run-parts --report /etc/cron.monthly

I saw that anacron already executes the daily/weekly/monthly jobs. In my case I want to execute the command on a daily basis, so I focused on the first line, where I can clearly see that anacron executes all scripts in cron.daily every day with a delay of 5 minutes. So I discarded the idea of creating an anacrontab, or rather an entry in /etc/anacrontab, and instead intended to create a cron job placed in cron.daily, because anacron will execute every script inside this folder anyway. Now, when I create a "daily" crontab with crontab -e, according to man crontab the cronta
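For comparison, a sketch of the two places a daily job can live; the script name is made up:

    # Option 1: let anacron/run-parts handle it - drop an executable script
    # into /etc/cron.daily (run-parts skips filenames containing a dot):
    sudo install -m 755 myjob.sh /etc/cron.daily/myjob

    # Option 2: a per-user entry via "crontab -e", which is what ends up in
    # /var/spool/cron/crontabs/<user>:
    # m  h  dom mon dow  command
    15 3 * * * /home/user/myjob.sh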

active directory - Convert SBS 2003 domain to Server 2008 R2 Standard - will this work?

I have been tasked with converting an SBS 2003 domain to Windows Server 2008 R2 Standard. The customer plans to expand past the limitations of SBS, and does not use Exchange Server at all. Here's the current set up: City A: This is the client's original first location. There is one server here running SBS 2003. It performs domain controller and fileserver roles, and has all the FSMO roles because it's SBS. City B: The client set up a branch at this location with a server running Windows Server 2008. It also acts as a domain controller and fileserver. The client wants to move away from Small Business Server, so they have decided to upgrade the server in City A to Server 2008 R2 Standard. It will continue to provide the same services it is currently providing. They have opted not to purchase new server hardware, so the upgrade must be "in-place". I am seeking feedback on the following upgrade plan: Back up files and system state from SBS server using ntbackup Shut

ntpd - Things to consider when running public NTP servers

So, it recently dawned on me that since I have 3 GPS clocks in my network, I could, technically, give back a little and serve time to the rest of the world. So far I haven't seen any downsides to this idea, but I have the following questions: Can I virtualize this? I'm not going to spend money and time on standing up hardware for this, so virtualization is a must. Since the servers will have access to three stratum 1 sources, I can't see how this would be a problem provided the ntpd config is correct. What kind of traffic does a public NTP server (part of pool.ntp.org) normally see, and how big do the VMs need to be? ntpd shouldn't be too resource-intensive as far as I can gather, but I'd rather know beforehand. What security aspects are there to this? I'm thinking of just installing ntpd on two VMs in the DMZ, allowing only NTP in through the firewall, and only NTP out from the DMZ to the internal NTP servers. There also seem to be some ntp settings that are recommende
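On the security point, the usual ntp.conf hardening for a public server looks roughly like this (the upstream addresses stand in for the internal stratum 1 sources):

    server 10.0.0.11 iburst
    server 10.0.0.12 iburst
    server 10.0.0.13 iburst

    # answer time queries, but refuse modification, traps, peering and status queries
    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict -6 ::1

    disable monitor    # avoids the monlist amplification vector on older ntpd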

apache 2.2 - How do I configure Apache2 to allow more than one SSL Certificate on the same IP?

Let's say I have two sites that are currently configured on the same box in VirtualHost blocks. How do I configure Apache so that I can put both sites under two different SSL certs? http://a.com => https://b.com http://b.com => https://a.com An answer containing a link to a good tutorial on setting this up would be fantastic. Thanks. Answer You don't. At least not yet, not on live web servers. SSL negotiation, key exchange, sending of the certificate and all of that is done before the request is transmitted -- that is, before the server even knows which site the client is trying to connect to. The site has to serve the right certificate, though, or it will get flagged by the browser as fraudulent. So, SSL requires one certificate per IP/port combo. There is no way around that, short of changing SSL itself. If you want multiple SSL sites with different certs on the same port, you'll need different IPs for each. (Although SSL sites are typically on port 4
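To make the answer concrete, the one-certificate-per-IP layout it describes looks like this in Apache (addresses and certificate paths are placeholders):

    <VirtualHost 192.0.2.10:443>
        ServerName a.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/a.com.crt
        SSLCertificateKeyFile /etc/ssl/private/a.com.key
    </VirtualHost>

    <VirtualHost 192.0.2.11:443>
        ServerName b.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/b.com.crt
        SSLCertificateKeyFile /etc/ssl/private/b.com.key
    </VirtualHost>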

web server - Can't FTP to /var/www/html on Ubuntu (groups set...)

So I'm having some issues with vsftpd. I need the webroot to be in /var/www/public because I have multiple users accessing it, so I can't put it in a user's directory. That directory has "chown -R www-data:www-data" applied, and I've verified with ls -la that it's all owned by www-data. Running groups on the user that I'm FTPing with returns "username sudo www-data". If I try to FTP with that user with the base directory set to /var/www/public, I get "No permission" in Dreamweaver and "Failed to change directory" in Cyberduck, which I assume is a permission problem as well. How do I fix this? Again, a user directory != an option. Here are the permissions leading up to the folder:

    /var            - drwxr-xr-x 13 root     root     4096 May  6 19:52 var
    /var/www        - drwxrwxr-x  4 www-data www-data 4096 May 19 20:01 www
    /var/www/public - drwxrwxr-x  7 www-data www-data 4096 May 19 20:01 public

EDIT: I could fix this by using ProFTPd with root privs, that is pre
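A sketch of the usual fix, assuming the FTP user should write as a member of the www-data group (the vsftpd options shown are stock ones, not taken from the question):

    # make the tree group-writable and keep new directories in the www-data group
    chmod -R g+rwX /var/www/public
    find /var/www/public -type d -exec chmod g+s {} \;

    # /etc/vsftpd.conf
    write_enable=YES
    local_umask=002              # new files 664 / dirs 775, so the group can write
    local_root=/var/www/public   # land FTP sessions in the web root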

raid - How do you connect more than 4 drives to a LSI 9361-4i MegaRAID controller?

I'm fairly new to proper hardware RAID, having only used it in pre-built server machines before, but on the recommendation of some friends who work for datacentres I've bought an LSI 9361-4i MegaRAID controller to install into my main storage box at home, as the Intel RST configuration I currently have set up locks up all I/O on the machine entirely whenever I try to write anything more than 5MB to the discs, until the operation has completed. According to the text on the box, the 9361-4i supports up to 128 drives; however, there is only one mini SAS HD port on the card itself, so as far as I can work out I can only connect four devices to the controller via this port. My questions are: What additional hardware or cables do I need to be able to connect more than four devices to a controller of this type? Should I get one or more expansion modules to connect via the mini SAS HD port using an x4 cable? Do these need to be specific cards for it to work? Also, how would this impact the band

Reverse SSH tunnel: how can I send my port number to the server?

I have two machines, Client and Server. Client (who is behind a corporate firewall) opens a reverse SSH tunnel to Server, which has a publicly-accessible IP address, using this command: ssh -nNT -R0:localhost:2222 insecure@server.example.com In OpenSSH 5.3+, the 0 occurring just after the -R means "pick an available port" rather than explicitly calling for one. The reason I'm doing this is because I don't want to pick a port that's already in use. In truth, there are actually many Clients out there that need to set up similar tunnels. The problem at this point is that the server does not know which Client is which. If we want to connect back to one of these Clients (via localhost) then how do we know which port refers to which client? I'm aware that ssh reports the port number to the command line when used in the above manner. However, I'd also like to use autossh to keep the sessions alive. autossh runs its child process via fork/exec, presumably, so
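One workaround, sketched here rather than taken from an answer, is to give every client its own fixed remote port instead of letting the server pick one; autossh can then keep the tunnel alive without the -M monitor port:

    # client #42 (the port-numbering scheme is invented for the example)
    autossh -M 0 -f -nNT \
        -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -R 10042:localhost:2222 insecure@server.example.com

    # on the server, reach that particular client through its assigned port
    ssh -p 10042 user@localhost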

vpn - pptpd server does not route internet traffic?

I have set up a pptpd server on my computer and clients can connect to it successfully. I have enabled IP forwarding in /etc/sysctl.conf and added the following rule to my iptables to masquerade the traffic: iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE There is no other restricting rule in my iptables. My pptpd IP configuration is: localip 192.168.0.1 remoteip 192.168.0.2-254 and my local IP range is 192.168.1.0/24. The problem is that my clients cannot access the internet via my server. Is there anything else I should have done? (Both my server and clients are on the same local network.) Answer You are doing it wrong. Using a VPN here gives you no additional security or benefit. Get rid of it. Simpler is always better. If you're trying to enforce the rule that users have to be "logged in" to get out of your network, use a proxy, for example Squid & SquidGuard.
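For completeness, the pieces that normally have to be in place for VPN clients to reach the internet through the server (eth0 as the outgoing interface is an assumption):

    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
    iptables -A FORWARD -s 192.168.0.0/24 -j ACCEPT
    iptables -A FORWARD -d 192.168.0.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT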

linux - Authenticate CVS users against Active Directory

I have a mixed Linux/Windows software development environment where the Linux clients are migrating to a system where they are able to authenticate against Active Directory. (That part I figured out) Our lab is currently using CVS to conduct version control on our source code. In the migration, we will need users to be able to authenticate to our CVS server. I have it planned such that when the migration occurs, we will set up the CVS server to also authenticate users against AD. Unfortunately, I do not have a lot of experience with CVS. Is this task even possible? From what I understand, it can be set up to authenticate users based on the local users on the system. However, since the actual users won't have their credentials stored locally on the server (as it's pulling them from AD), is it possible to point CVS to rely upon pam for authentication? I have read about accessing CVS over SSH with user credentials. Would that be a requirement for this to occur? If so, how

networking - Loopback to forwarded Public IP address from local network - Hairpin NAT

This is a Canonical Question about Hairpin NAT (Loopback NAT). The generic form of this question is: We have a network with clients, a server, and a NAT router. There is port forwarding on the router to the server so some of its services are available externally. We have DNS pointing to the external IP. Local network clients fail to connect, but external clients work. Why does this fail? How can I create a unified naming scheme (DNS names which work both locally and externally)? This question has answers merged from multiple other questions. They originally referenced FreeBSD, D-Link, MikroTik, and other equipment, but they're all trying to solve the same problem. Answer What you're looking for is called "hairpin NAT". Requests from the internal interface for an IP address assigned to the external interface should be NATted as though they came in from the external-side interface. I don't have any FreeBSD familiarity at all, but read
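A minimal iptables sketch of hairpin NAT, assuming LAN 192.168.1.0/24, an internal server at 192.168.1.10, a public address of 203.0.113.5 and a service on port 80:

    # normal port forward (applies to traffic from outside and from the LAN)
    iptables -t nat -A PREROUTING -d 203.0.113.5 -p tcp --dport 80 \
        -j DNAT --to-destination 192.168.1.10

    # the hairpin: when a LAN client reaches the server via the public IP,
    # rewrite the source as well so the reply goes back through the router
    iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.10 -p tcp --dport 80 \
        -j MASQUERADE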

Do all mirrored pairs in a RAID 10 configuration need to be homogeneous in size?

I was planning to expand an existing RAID 10 array composed of four 1TB disks with the addition of two 2TB drives. Then I wondered whether, despite the configuration of RAID 10 (striping of mirrored sets), the new pair would see only half of its size used. The question is, therefore, whether each mirrored pair in a RAID 10 configuration needs to be identical in size, or whether each pair might be a different size and yet keep its entire capacity available for data storage. Answer If any disk in your array is larger than the other disks in the array, the excess space cannot be allocated to the array. You can, however, use the excess space in another array, provided the excess space is the same size as or larger than that additional array. So in short, yes, you are correct: only half the space on the 2TB disks will be used. BUT, you can use that remaining 1TB per disk to create a new RAID1 array on that space so that it does not go to waste.

Windows folder permissions, why is this happening?

On my Windows server I have a share called 'Data'; inside Data are 3 child folders called 1, 2 and 3. The NTFS permissions are as follows: Data (the AllStaff group has Modify here, Admins have Full Control) - folder1 (inherits parent permissions) - folder2 (the AllStaff permissions have been removed and 2 non-admin users have been added with Modify permission) - folder3 (inherits parent permissions). My problem is that everyone can still read and write in folder2, even though looking at the NTFS permissions the folder is not inheriting permissions from the parent and only the 2 users should have access (plus admins), yet everyone can access it! I decided to start over and leave the AllStaff group with Modify permissions on folder2 but check the 'Deny' box for each type of permission, and gave the 2 users Modify; now the 2 users that should have access can't get in. They are members of the AllStaff group, so I can see why this would be. Can someone explain why I can't achieve my goal? B

Visually identifying Dell disks on Solaris

I have a Dell PE1950 running the latest OpenSolaris, connected to a Dell MD1000 enclosure with 15 disks in it. I am not using PERC to control the disks; instead I use a simple SAS 5/E (LSISAS1068) controller that exposes the raw disks so we can use ZFS RAID instead of hardware RAID. It all works very well, but I have one worry about the time when we need to replace one of the disks for any reason. When I used PERC, it had the capability of turning on the error LED on the disk if something went bad, and also gave me a way to manually blink the LED should I want to physically locate it for any reason. However, now that I use the plain SAS connection it looks like these capabilities are inaccessible, and the only way to identify the disk is by guessing what it is from the device number (which I find very risky), or shutting down the whole system, pulling the disks one by one and comparing the serial numbers. Both options are, of course, not acceptable. I would like to know if there is any way that

Correct way to setup linux company share folder with usable permissions

I've just started working for a small company that has a globally shared folder run from an Ubuntu file server (RAID 5, etc.) shared via Samba. The workstations (CentOS) are all set up via fstab to mount this Samba share in their root file system at /Data/. The client fstab config:

    //10.1.1.3/DATA01 /DATA01 cifs username=username,password=password 0 0

The Samba configuration for the share:

    [DATA01]
        path = /DATA01
        writeable = yes

Before I started, the boss had decided he'd had enough of Unix permissions and user mismatch "problems", so every workstation and server has the same user and group name (the current project name) - we'll call this project. He'd also made sure that the user UID on every machine was identical - I'm not sure how important this is at a practical level. So in short, an ls -l /Data results in items that are chowned project:project. This is what I've just walked into. My job here is to administer all of this, but I'll ad
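For anyone in the same spot, a hedged smb.conf sketch that keeps everything in one group regardless of which user connects (using the same group name as the question; the masks are conventional choices, not taken from the post):

    [DATA01]
        path = /DATA01
        writeable = yes
        valid users = @project
        force group = project
        create mask = 0664
        directory mask = 2775    # setgid bit so new directories keep the group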

domain name system - How to use CNAME to external hosts on local DNS Server?

I have a domain mydomain.com and I'm using an external DNS server to resolve some names like www.mydomain.com and webmail.mydomain.com on the Internet. Now, I need to create a rule, only on my LAN, to resolve newserver.mydomain.com to newserver.cloudapp.net (a new server hosted at Azure). At first I thought, "I can create a CNAME entry on my local DNS server and the job is done!" but to do it I had to create another zone to resolve *.mydomain.com, and everything went wrong. My DNS server was able to resolve newserver.mydomain.com, but all other URLs were unavailable... of course, my local DNS server had become "authoritative" for that domain. I'm using Windows Server 2012, and I only want to configure my DNS server to do this: resolve the entry newserver.mydomain.com CNAME newserver.cloudapp.net, while *.mydomain.com keeps resolving on the external DNS server. How can I do that? Thanks in advance! ;)

spam marked - Why am I getting 500 error while trying to send emails to some domains?

Recently I've been seeing the following error while trying to send emails to some domains: Final-Recipient: name@example.com Original-Recipient: name@example.com Action: failed Status: 5.0.0 Remote-MTA: dns; mail.example.com Diagnostic-Code: smtp; 550 "The mail server detected your message as spam and has prevented delivery." Neither my IP address nor my domain is blacklisted. The email is a simple test email without attachments or links. What can I do or where can I get more information about this issue? Thank you

routing - Amazon AWS: SSL in our testing environment

My company has a running business web service on AWS which uses a wildcard SSL certificate (works for *.example.com). Multiple partners have subdomains and need SSL ( https://partner1.example.com , https://partner2.example.com , etc.) and we serve either HTML or JSON responses to them. Our production site has a web-facing ELB that has our SSL certificate installed. The ELB balances and terminates SSL to a cluster of production server instances within our VPC. Our web app knows how to serve the correct partner HTML/JSON based on the subdomain name. I want to replicate this for our test environments (e.g. QA, Staging, Demo) as simply as possible. But test and production environments can't be the same server instances; I need to know that I won't kill production if I mess up our test environment. Ideally, the same ELB that handles production traffic could somehow route traffic to my test servers, perhaps if they used a known IP address or DNS subdomain? Am I correct that

amazon web services - AWS Elastic Load Balancer flags environment as Red because TLS enabled on EC instance

We are using AWS Elastic Beanstalk with an Application Load Balancer to deploy a .NET application. In the beginning we had a wildcard cert that we used at the load balancer level. The actual EC2 instance and the IIS on it did not handle TLS traffic, since the load balancer stripped the connection down to non-TLS. Everything was fine. However, later we needed to install an SSO (service provider) tool on the instance, and this tool requires TLS. We ended up, in addition to the cert on the load balancer, creating a Let's Encrypt cert at the EC2 instance level. What ended up happening after this is that AWS keeps flagging all our instances as having a health of Red, because the load balancer keeps trying to request http://localhost:443 (or http://IP:443 ) and these are not coming back. I tried using a self-signed cert bound to localhost on the EC2 instance, but this did not work because (I think) the LB has to trust this cert before it will receive a 200 back. How do I handle a situation like this?

domain name system - Custom Nameserver

OK, I've searched for the past 6 hours and can't figure this out. I have several domains; in shared hosting I set the nameservers as ns1.my-hosting-company.com & ns2.my-hosting-company.com. Now I want to set up a VPS, so I created a domain my-custom-name-server.com and added glue records pointing to a static IP, so: ns1.my-custom-name-server.com == 1.2.3.4 and ns2.my-custom-name-server.com == 1.2.3.4. Now I want to use this nameserver for all my domains: example.com, example.net, example.tld. When I add the nameserver for example.com as ns1.my-custom-name-server.com it isn't working. The reason for this setup is that if I move to a different server I just need to update ns1 & ns2 of my-custom-name-server.com. Update Finally!!! It worked.. oh the joy when it worked ;-) This is what got me to the solution: port 53 needs to be open in the INPUT chain. I was using bind9, and listen-on port 53 in named.conf should have been set to any. Vanity nameservers: for professionally managed DNS, this is the solution I was looking for: https
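The named.conf side of that fix looks roughly like this (recursion no is an extra precaution, not something from the original post):

    options {
        listen-on port 53 { any; };      // was effectively loopback-only before
        listen-on-v6 port 53 { any; };
        allow-query { any; };            // answer authoritative queries from anywhere
        recursion no;                    // but don't act as an open resolver
    };

    # and the firewall part mentioned above:
    iptables -A INPUT -p udp --dport 53 -j ACCEPT
    iptables -A INPUT -p tcp --dport 53 -j ACCEPT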

What algorithm does Amazon ELB use to balance load?

I found this in the official ELB documentation By default, a load balancer routes each request independently to the application instance with the smallest load. but an article on Newvem says that ELB supports only the Round Robin algorithm Algorithms supported by Amazon ELB - Currently Amazon ELB only supports Round Robin (RR) and Session Sticky Algorithms. So which one is it? [1] http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_StickySessions.html [2] http://www.newvem.com/dissecting-amazon-elastic-load-balancer-elb-18-facts-you-should-know/?lead_source=popup_ebook&oid=00DD0000000lsYR&email=muneeb%40olacabs.com Answer It's request-count based for HTTP(S), and round robin for others. http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html#request-routing Before a client sends a request to your load balancer, it first resolves the load balancer's domain name with the Domain Name System