domain name not taking effect?

Hi, I recently signed up for an account that supposedly allows multiple domains. I have forwarded my domain to point to my pythonanywhere address, set up a CNAME record, and registered my private domain name in the web console. But that last bit doesn't seem to do anything. When I access my domain, the browser takes me to my pythonanywhere site, but the browser's URL field shows my pythonanywhere address rather than my domain.

Something has to be wrong, because I could have achieved what I now have with the free account. Being able to use my own domain has to mean something more. Can you please help?

Murali

PS: I mostly like this site in the few days I have been trying it out. I had never used Django before, but I was able to make progress in getting some examples to work. PAW is pretty awesome.

Also, I can't seem to send mail.

Another user posted in this forum that he could do:

    from django.core.mail import send_mail
    send_mail('test email', 'hello world', 'from@example.com', ['to@example.com'])

But I get "connection refused". I think it is trying to connect to port 25 on localhost (hansel-liveweb)

Thanks, Murali

It sounds like you've set up a redirect, and not a CNAME. When I dig your domain, I get an A record (with a TTL of 3445) rather than a CNAME, and the same for the www subdomain. Check with your domain name host. Also note that, because of a limitation in how CNAMEs work, you can't have a CNAME for the bare domain; you'll have to use www or some other prefix.

For the mail thing, you need to configure your sending server in your Django settings; we don't provide an SMTP host. If you're using Gmail, you'll need to include your username and password to connect to Google's SMTP servers.
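For reference, here's a minimal sketch of the Django settings involved, assuming Gmail's standard SMTP endpoint (smtp.gmail.com on port 587 with TLS); the username and password values are placeholders you'd replace with your own:

```python
# settings.py -- SMTP configuration for sending mail via Gmail.
# Host/port/TLS are Gmail's published SMTP settings; the
# credentials below are placeholders, not real values.
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = 'you@gmail.com'        # placeholder
EMAIL_HOST_PASSWORD = 'your-password'    # placeholder
```

With those settings in place, send_mail() will connect out to Gmail rather than trying localhost port 25.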

Thanks Glenn!

I had tried CNAME, but I think the key was that I didn't use a prefix on my domain name.

I had figured out the mail problem.

thanks for prompt help, Murali

Note: this post has had a minor correction as outlined in the post immediately following it.

DNS can be a bit of a black art, but about 90% of people don't need to worry about 90% of it.

There are three types of records that most people need to worry about:

  • A record - These are the primary DNS records which turn a hostname (which people can understand) into an IP address (which is what computers actually use to talk across the Internet).
  • CNAME record - This is effectively a pointer to another record, conceptually a little like a shortcut under Windows or a symbolic link under Linux/Unix. It's a form of redirect, but it happens entirely in the DNS layer (I'll get to that in a second).
  • MX record - These records show how email for the domain is handled - you might not need to worry about these if your DNS hosting provider has sorted out email for you (or if you're not using email on your domain).

There are many other types of DNS records, but mostly people don't need to worry about them (with the possible exception of AAAA which are like A records but for IPv6 addresses - if that means nothing to you, ignore it).

If you're curious, CNAME stands for canonical name because it's creating an alias under another name - so the target of the CNAME record is intended to point to the "proper" name for the domain. This is kind of misleading in some cases because of the way CNAME records can be used, however - the name just indicates their original intended purpose.

On a standard website, therefore, you might have an A record for example.com declaring it to have IP address 203.0.113.10 and a CNAME record for www.example.com pointing to example.com. The end result to the user is that they can use either name and get the same IP address because the DNS system underneath is doing the lookup for them. This is important because it means that, say, in a web browser the URL still shows up as www.example.com in the address bar - the web browser itself doesn't necessarily even know that the two names point to the same IP address.
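As a concrete illustration, those two records would look something like this in a hypothetical BIND-style zone file (the domain, TTL and IP address are all placeholders):

```
example.com.        86400   IN  A       203.0.113.10
www.example.com.    86400   IN  CNAME   example.com.
```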

This is completely different from an HTTP redirect. In that case the first DNS name lookup returns an IP address and the browser fetches the page; the response, however, is a redirect issued by the webserver, not by DNS at all. This means that the browser will then follow the redirect and do another DNS lookup for the new URL. I know people can sometimes get confused between these two because they have similar results, but they work in very different ways, and the HTTP redirect causes the address in the browser's URL bar to change (there are lots of other potential differences, but let's not complicate things here).

Now, in the case of PAW the CNAME mechanism is being used to make a subdomain of yours (e.g. www.yourdomain.com) point to a completely different domain (yourusername.pythonanywhere.com in this case). So what happens with this setup is:

  1. User enters a URL at www.yourdomain.com into their browser.
  2. Browser performs a DNS lookup to ask for the IP address of this domain.
  3. The DNS system looks up your DNS hosting provider and finds the CNAME record pointing to yourusername.pythonanywhere.com.
  4. The DNS system then repeats the lookup on yourusername.pythonanywhere.com and this time finds an IP address (e.g. 203.0.113.10).
  5. Since there's now an IP address the DNS system's job is done - it returns this address to the browser.
  6. The browser then opens a TCP socket to the IP address and port 80 (if sockets, TCP and ports are nonsense to you, don't worry).
  7. The browser sends the rest of the URL (i.e. the bit after the hostname) down the connection to the webserver at PAW and it responds with the correct page as served by your web app.
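The lookup steps above can be sketched in a few lines of Python. This is a toy resolver over a hard-coded record table (the names and IP address are made-up placeholders), not a real DNS client, but it shows the CNAME-following loop:

```python
# Toy DNS table: name -> (record type, value). Placeholders only.
RECORDS = {
    'www.yourdomain.com': ('CNAME', 'yourusername.pythonanywhere.com'),
    'yourusername.pythonanywhere.com': ('A', '203.0.113.10'),
}

def resolve(name, max_hops=10):
    """Follow CNAME records until an A record (an IP address) is found."""
    for _ in range(max_hops):
        rtype, value = RECORDS[name]
        if rtype == 'A':
            return value   # job done: hand the IP back to the browser
        name = value       # CNAME: repeat the lookup on the target
    raise RuntimeError('CNAME chain too long')

print(resolve('www.yourdomain.com'))  # -> 203.0.113.10
```

Either name resolves to the same address in the end; the browser only ever sees the final IP.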

Voila. Hopefully that's been slightly useful and/or interesting. I think it's always easier to get these things working if you have a general idea of how it's working under the hood. If you have any more detailed questions on DNS, or networking in general, do feel free to ask.

One of these days I'll get around to writing my "How The Internet Works For Beginners Who Are Just Curious" page when I have time... Speaking of which, good grief, I just found this entry on h2g2 I started in 2003! It was supposed to be for complete beginners. I'll have to re-read it again sometime and see how wrong I got it! (^_^)

Thanks, Cartroo -- that's a great primer! One correction: we ask people to create their CNAMEs pointing at their own PythonAnywhere subdomain (e.g. yourusername.pythonanywhere.com), not just pythonanywhere.com. In the light of your explanation, perhaps I can explain why that's useful, and why we don't just ask people to create an A record pointing to our main IP address.

Right now, the number of sites hosted on PythonAnywhere is small enough that everyone can be behind the same nginx reverse proxy -- that is, everyone's site actually sits at the same IP address, but the hard work of serving sites can be farmed out by nginx to other servers so we can scale up the number of sites we support just by adding more servers behind nginx.

But let's imagine some time in the future, when we have tens of thousands of domains hosted at PythonAnywhere. (We're getting there.) We can't have them all pointed at the same IP address, because even if we run a super-fast nginx server on that IP address to delegate the work of each website to a separate web server, we still wind up with all of the traffic for all of the domains going through the same network port, which obviously won't work, however fast it is.

So, we want to be able to separate out the traffic for each of our users so that we can point, say, user giles' traffic to one computer and user donthireddy's traffic to another. giles might share an IP with a bunch of other users, and likewise donthireddy with a different group of other users, but they each have a different IP.

The thing is, we'll need to adjust who uses which IP dynamically; that is, we might want to switch donthireddy's site from one IP address to another as we balance out the load between our servers. But we don't want donthireddy to have to log in to his DNS provider every time we change the IP address, because that would be a pain for everyone.

So, right now giles.pythonanywhere.com and donthireddy.pythonanywhere.com both point to the same IP address. That IP address is controlled by us, using our own DNS servers. But with just a slight change, we can switch the names over so that they point at different IPs. Which means that if giles has a domain with a CNAME pointing at his PythonAnywhere subdomain, and likewise donthireddy has one pointing at his own subdomain, we can make a simple DNS change and both of their domains are suddenly being served by a different proxy, so we can scale up without having to bother either of them to change their DNS settings.

It gets even better than that, though. Because CNAMEs give us control over the ultimate IP address that our users' domains wind up at, we can also route different requests to different IPs. This means that for a particular domain we can start sending half the requests to one IP and half to another (using something like round-robin DNS), so at some point in the future we can start making it easy for people to run high-availability sites served from multiple IPs, which is great for load-balancing and reliability.
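The round-robin idea itself is simple enough to sketch. Here's a hedged toy version in Python, where the name and both IP addresses are placeholders; in reality the rotation happens inside the DNS server, which reorders the A records it returns on each query:

```python
from itertools import cycle

# Placeholder pool of backend IPs for one hosted domain.
ROTATIONS = {
    'yourusername.pythonanywhere.com': cycle(['203.0.113.10', '203.0.113.11']),
}

def resolve_round_robin(name):
    """Return the next IP in the rotation for this name."""
    return next(ROTATIONS[name])

# Successive lookups alternate between the two placeholder addresses,
# spreading the traffic across both backends.
ips = [resolve_round_robin('yourusername.pythonanywhere.com') for _ in range(4)]
print(ips)  # -> ['203.0.113.10', '203.0.113.11', '203.0.113.10', '203.0.113.11']
```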

Hope that's reasonably clear and helpful!

Ah, thanks for the clarification, I've corrected my post. Apologies for the mistake - this sort of thing can be tricky enough without people like me spreading minor misinformation, no matter how well-intentioned! (^_^)

Your explanation was certainly clear (to me at least!) and makes a lot of sense. Couple of things I'll mention because I've been bitten by them in the past...

Firstly, the idea of load-balancing on the basis of customers is generally a great one, especially as it allows you to partition your back-end services quite nicely (e.g. separate storage arrays for subsets of customers). However, I've run into cases where a single customer has the potential to become responsible for the majority of the traffic and the scheme tends to fall down a little at that point.

In the case of PAW I very much doubt this would ever be the case as it's a much more "horizontal" service than the one on which I was working. If it did, I suspect you'd want to negotiate different contract terms with the individuals involved anyway, which makes the concern a bit moot. On top of that, you could easily have a round-robin-DNS approach for a single user if it was really necessary. I just wanted to mention it in case it's worth having a brief think whether it's likely enough that it's worth having a "battle plan" for how you'd deal with it should it ever arise (not that someone like me would ever be bitten by the "oh that'll never happen to us" fallacy or anything... Ahem, moving swiftly on...).

Secondly, and probably a little more relevantly, if you plan to switch users around like that then you need to pay careful attention to your DNS TTL. Currently it looks like the TTL on your records is 86400 (24 hours), which is entirely reasonable right now since everything is hosted together. You'd obviously want to reduce that down quite a bit, however, if you were going to start using DNS lookups for load-balancing and/or failover. Also, don't underestimate the number of broken DNS clients (and particularly caching proxies) out there who'll just plain ignore the TTL, so even if you've switched a user over you'd still want to make sure the back-end at the old address was capable of serving their requests for some time, I'd say.
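For what it's worth, the TTL behaviour a well-behaved client should follow can be sketched like this. It's a toy cache with a fake clock so the expiry is visible; the name, address and TTL are placeholders, and the broken clients mentioned above are precisely the ones that skip the expiry check:

```python
import time

class DnsCache:
    """Toy DNS cache that honours record TTLs."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.entries = {}  # name -> (value, expiry_time)

    def put(self, name, value, ttl):
        self.entries[name] = (value, self.clock() + ttl)

    def get(self, name):
        value, expires = self.entries.get(name, (None, 0))
        if self.clock() >= expires:
            return None  # expired: a fresh lookup is required
        return value

# Fake clock so the example is deterministic.
now = [0.0]
cache = DnsCache(clock=lambda: now[0])
cache.put('www.yourdomain.com', '203.0.113.10', ttl=86400)
assert cache.get('www.yourdomain.com') == '203.0.113.10'  # still fresh
now[0] = 86401  # a day later: the record has expired
assert cache.get('www.yourdomain.com') is None
```

A client that ignores the expiry check keeps returning the stale address, which is why the old backend needs to keep serving requests for a while after a switch.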

You've probably thought all this through already, but I figured it was worth mentioning now just in case, so there's plenty of time to consider the issues before the traffic load makes them more important.

Thanks, Cartroo -- that's a great checklist for when we start making those changes, especially the point about broken DNS clients. Given the scaling problems we've been having recently with the console server (we think we have a quick fix that will buy us some time, BTW), we're taking planning for this kind of thing very seriously now...

That's good to hear, I wish more services would take scalability more seriously early in their development. (^_^)

'Course, commercial realities being what they are I can understand that other stuff is always higher priority in the early days, but it's always worth having a plan of action ready if there's a sudden surge in demand. Believe me, I've been in the position of having to iron out bottlenecks in distributed systems to fix very urgent issues and it can be like playing Whac-A-Mole in the dark. It's always worth figuring out your bottlenecks and how you'd go about scaling them even if you don't do it straight away.

As an aside, one easy way of coping with broken DNS clients is to tie both your DNS and HTTP layers into the same load-balancing back-end such that the DNS takes care of most users and you can use HTTP redirects to mop up the rest. At least then the borked clients only cause the overhead of a redirect, rather than unbalanced connections between servers. Also, you'll really want to make the system automatic before it gets too big - having to manually load-balance gets tedious really quickly!

One other tip: if at all possible, plan so that you can horizontally scale any component at will. That will increase costs, of course, but at least you can cope in the short term, and optimising to reduce the number of components required becomes a cost-saving internal issue rather than a customer-visible one. This reduces the pressure around such changes, which is always nice.

Still, I'm not trying to say that's all super-urgent, just that you really want to think about it early.