Thanks for the continued interest in the project!
I thought about this, mainly in terms of whether Google's Public DNS service is capable of handling significant traffic. From their docs:
In Google Public DNS, we have implemented several approaches to speeding up DNS lookup times. Some of these approaches are fairly standard; others are experimental:
- Provisioning servers adequately to handle the load from client traffic, including malicious traffic.
- Preventing DoS and amplification attacks. Although this is mostly a security issue, and affects closed resolvers less than open ones, preventing DoS attacks also has a benefit for performance by eliminating the extra traffic burden placed on DNS servers. For information on the approaches we are using to minimize the chance of attacks, see the page on security benefits.
- Load-balancing for shared caching, to improve the aggregated cache hit rate across the serving cluster.
- Providing global coverage for proximity to all users.
Based on this and other statements in their docs, I do believe they have provisioned enough capacity to deal with any load the Rust community could throw at them. It's definitely a valid concern, though.
Criticize away; seriously, I appreciate the feedback, and I think you've raised a very valid concern. If this becomes wildly successful, that would be pretty amazing. Honestly, I just wanted to scratch an itch; successful or not, it's been a great way to become more familiar with Rust.
Yes, I did consider this. To be clear, I have no affiliation with Google and no interest in giving away private data for their data-mining desires. That being said, I'm open to other ideas for making it more explicit that the Google name servers are used. For reference, here is their privacy statement for the service:
What we log
Google Public DNS stores two sets of logs: temporary and permanent. The temporary logs store the full IP address of the machine you’re using. We have to do this so that we can spot potentially bad things like DDoS attacks and so we can fix problems, such as particular domains not showing up for specific users.
We delete these temporary logs within 24 to 48 hours.
In the permanent logs, we don’t keep personally identifiable information or IP information. We do keep some location information (at the city/metro level) so that we can conduct debugging, analyze abuse phenomena. After keeping this data for two weeks, we randomly sample a small subset for permanent storage.
We don’t correlate or combine information from our temporary or permanent logs with any personal information that you have provided Google for other services.
Finally, if you’re interested in knowing what else we log when you use Google Public DNS, here is the full list of items that are included in our permanent logs:
- Request domain name, e.g. www.google.com
- Request type, e.g. A (which stands for IPv4 record), AAAA (IPv6 record), NS, MX, TXT, etc.
- Transport protocol on which the request arrived, i.e. TCP, UDP, or HTTPS
- Client’s AS (autonomous system or ISP), e.g. AS15169
- User’s geolocation information: i.e. geocode, region ID, city ID, and metro code
- Response code sent, e.g. SUCCESS, SERVFAIL, NXDOMAIN, etc.
- Whether the request hit our frontend cache
- Whether the request hit a cache elsewhere in the system (but not in the frontend)
- Absolute arrival time in seconds
- Total time taken to process the request end-to-end, in seconds
- Name of the Google machine that processed this request, e.g. machine101
- Google target IP to which this request was addressed, e.g. one of our anycast IP addresses (no relation to the user’s IP)
I have this documented here:
This uses the default configuration. Currently this sets the Google resolvers as the upstream resolvers. I've just updated this to say:
This uses the default configuration, which sets the Google Public DNS as the upstream resolvers. Please see their privacy statement for important information about what they track; many ISPs track similar information in DNS.
I had documented Default::default similarly with:
Creates a default configuration, using 8.8.8.8, 8.8.4.4 and 2001:4860:4860::8888, 2001:4860:4860::8844 (thank you, Google). I've updated this to include:
Please see Google's privacy statement for important information about what they track; many ISPs track similar information in DNS. To use the system configuration, see:
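For anyone curious, the default upstreams boil down to those four addresses on the standard DNS port. Here's a small, stdlib-only Rust sketch of that list (the function name is mine for illustration, not the library's actual API):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr};

// Illustrative only: the Google Public DNS addresses that the default
// configuration points at, paired with the standard DNS port 53.
fn default_name_servers() -> Vec<SocketAddr> {
    vec![
        SocketAddr::new(IpAddr::V4(Ipv4Addr::new(8, 8, 8, 8)), 53),
        SocketAddr::new(IpAddr::V4(Ipv4Addr::new(8, 8, 4, 4)), 53),
        SocketAddr::new(
            IpAddr::V6(Ipv6Addr::new(0x2001, 0x4860, 0x4860, 0, 0, 0, 0, 0x8888)),
            53,
        ),
        SocketAddr::new(
            IpAddr::V6(Ipv6Addr::new(0x2001, 0x4860, 0x4860, 0, 0, 0, 0, 0x8844)),
            53,
        ),
    ]
}

fn main() {
    for ns in default_name_servers() {
        println!("{}", ns);
    }
}
```

All of these can be replaced with your own upstreams if you don't want Google in the path.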
My reasoning for all of this is that I wanted an out-of-the-box solution that would work for most people; all of it can be overridden and changed. I'm open to making the system's resolv.conf the default, but my hesitation is that I currently don't have support for reading the information out of the Windows Registry (or access to any machine to build that), and potentially other systems. See this issue for Windows: https://github.com/bluejekyll/trust-dns/issues/171. I'd love help with all Windows-related configuration and builds, as my only tool for supporting Windows at the moment is AppVeyor (I don't run Windows in any capacity).
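For context, on Unix the "system configuration" part mostly means pulling the nameserver entries out of resolv.conf. A rough, stdlib-only sketch of that parsing (illustrative only, not the library's implementation; resolv.conf has more directives than this handles):

```rust
use std::net::IpAddr;

// Extract `nameserver` entries from resolv.conf-style text,
// ignoring comments (`#`) and other directives like `search`.
fn parse_nameservers(conf: &str) -> Vec<IpAddr> {
    conf.lines()
        .filter_map(|line| {
            // Strip trailing comments, then split into fields.
            let line = line.split('#').next().unwrap_or("").trim();
            let mut parts = line.split_whitespace();
            match parts.next() {
                Some("nameserver") => parts.next().and_then(|ip| ip.parse().ok()),
                _ => None,
            }
        })
        .collect()
}

fn main() {
    let conf = "# local resolver\nnameserver 127.0.0.53\nsearch example.com\n";
    for ip in parse_nameservers(conf) {
        println!("{}", ip);
    }
}
```

Windows has no resolv.conf, which is why that platform needs the Registry-based approach mentioned above.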
Again, I'm very open to changes where people see them as necessary. At the moment, though, my preference is to make the library as easy to use as possible, and removing or changing the Default implementations may make that harder.