Disney is Doing Cross-Site Authentication All Wrong
Disney runs quite a few properties including disneyplus.com, hulu.com, espn.com, abc.com, and a bunch of obviously Disney sites like shopdisney.com, disneyworld.disney.go.com, and disneycruise.disney.go.com. They have a centralized authentication system so all of these sites can use the same email address and password to log in.
It has a couple major problems though:
- It isn’t obvious that the login is shared. The sites show a shared logo at login, but it’s not obvious to users that they all use the same credentials. I wouldn’t expect espn.com to use the same login as hulu.com, and I know that Disney owns both of them! Password managers also aren’t aware that the logins are tied together, so when you log in to one site and your password doesn’t work (because you don’t realize the accounts are shared), you end up resetting it — which silently breaks your password for another site you didn’t realize was connected.
- Users can’t verify that a site is legitimate. It would be trivial for an attacker to create a fake Disney site and mimic the Disney login system to capture passwords. I actually noticed this because my wife was logging into a site for Disney gift cards and I seriously thought it was a scam.
Disney should implement a shared login that uses a common login site (like login.disney.com) so that users can know they are on a legitimate Disney property. This fixes the issues above: users learn that they can trust login.disney.com, password managers use the same credentials across every site, and it becomes much harder for attackers to mimic a login page when users know that login.disney.com is the only legitimate place to enter their password.
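A minimal sketch of what such a flow could look like, following the standard OAuth 2.0 authorization-redirect pattern. All names here are hypothetical — login.disney.com is the proposal from this post, not Disney’s real infrastructure, and the client IDs and callback URLs are made up for illustration:

```python
# Sketch: each property redirects the browser to one trusted login host.
# Users always see the same origin in the address bar, and password
# managers save credentials against that single origin.
from urllib.parse import urlencode

LOGIN_HOST = "https://login.disney.com/oauth/authorize"  # hypothetical

def build_login_redirect(client_id: str, redirect_uri: str, state: str) -> str:
    """Build the URL a property (e.g. espn.com) sends the browser to."""
    params = {
        "response_type": "code",      # authorization-code grant
        "client_id": client_id,       # identifies the property
        "redirect_uri": redirect_uri, # where the user returns after login
        "state": state,               # CSRF token, echoed back on return
    }
    return f"{LOGIN_HOST}?{urlencode(params)}"

url = build_login_redirect("espn-web", "https://espn.com/auth/callback", "xyz123")
print(url)
```

After a successful login, the central site would redirect back to the property’s callback with a one-time code, so the user’s password is only ever typed into the one verifiable origin.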
The post Disney is Doing Cross-Site Authentication All Wrong appeared first on Brandon Checketts.
Stop Validating Domain Ownership with @ TXT Records
Lots of services need to validate ownership of a domain, especially for sending email or creating SSL certificates.
Creating a TXT record at the domain root (@) is a common practice, but I think it should be avoided. Many services ask you to add their token to this same record, which creates several concerns:
- It leaks information about which third-party services you use (or have used). This is a minor security issue, but an unnecessary one
- The process for adding multiple values to a single record is inconsistent across providers, so instructions have to be provider-specific. The steps on GoDaddy are different from those on Cloudflare
- Most DNS providers don’t support comments on records, and the record values are often not self-explanatory, so you end up with many lines and no idea which belongs to which service. To make matters worse, records are rarely removed when you stop using a service, so it becomes an ever-growing list
A better practice is to use either TXT or CNAME records on service-specific hostnames (e.g. google-verification-randomstring.mydomain.com) that contain a verification string or hostname. This avoids all of the problems above: the name can’t be guessed, and each record stands alone. Ideally, either the hostname or the value should indicate which service the record belongs to. A random value like 25376de5f10046a853b1395e756cbf66 doesn’t help me know what service it belongs to (I’m looking at you, AWS Certificate Manager)
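As a sketch of what this practice looks like in a zone file — all hostnames and tokens below are made up for illustration:

```
; Per-service verification on dedicated hostnames, not on @.

; TXT style: the hostname names the service, the value is its token.
google-verification-k3xq9.mydomain.com.   IN TXT    "k3xq9f8d2a7c"

; CNAME style (as AWS Certificate Manager uses): delegate the check
; to a hostname the validating service controls.
_7f3a1b2c4d.mydomain.com.                 IN CNAME  _9d4e5f6a.acm-validations.aws.
```

Each record is independent, can be deleted cleanly when the service is dropped, and a dig against the root of the domain reveals nothing.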
This is the kind of bloat you end up with when every service uses the @ TXT record, and when records get added by people who don’t know what they are doing.
```
01:21 $ dig -ttxt mydomain.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> -ttxt mydomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26429
;; flags: qr rd ra; QUERY: 1, ANSWER: 12, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;mydomain.com.    IN  TXT

;; ANSWER SECTION:
mydomain.com.  277  IN  TXT  "google-site-verification=qX5fQ3XXXXXXXXXXXXXXXXXXWvxBGAlVigFEW3nYzfU"
mydomain.com.  277  IN  TXT  "google-site-verification=oAHQRYYYYYYYYYYYYYYYYYYYYYD447rpeYhE81wPD44"
mydomain.com.  277  IN  TXT  "slack-domain-verification=sa2uZZZZZZZZZZZZZZZZZZZZZZZZZZZZZtTRsDOOS"
mydomain.com.  277  IN  TXT  "google-site-verification=3LEWAAAAAAAAAAAAAAAAAAAA8GGyhpkv-Ge3qhaOIn8"
mydomain.com.  277  IN  TXT  "facebook-domain-verification=zvyCCCCCCCCCCCCCCCCCCCCCC5lbhn"
mydomain.com.  277  IN  TXT  "v=spf1 include:spf.mandrillapp.com include:_spf.elasticemail.com include:aspmx.pardot.com ~all"
mydomain.com.  277  IN  TXT  "pardot885593=bd2638dff2ffffffffffffffffffffffffffffffffe46fd6c4dbffefa91"
mydomain.com.  277  IN  TXT  "include:servers.mcsv.net ?all"
mydomain.com.  277  IN  TXT  "include:_spf.google.com include:mailgun.org"
mydomain.com.  277  IN  TXT  "mandrill_verify.tIcfQQQQQQQQQQQQQQQQqaQ"
mydomain.com.  277  IN  TXT  "google-site-verification=rjDDDDDDDDDDDDDDDDDDDDDDDDDtzfGMuZKmt74DfQ0"
mydomain.com.  277  IN  TXT  "brevo-code:a0aaaaaaaaaaaaaaaaaaaaaabebed7419"

;; Query time: 4 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sun Dec 21 01:22:25 UTC 2025
;; MSG SIZE  rcvd: 984
```

I know they aren’t using several of those services, but cleaning them up requires carefully validating each one first, just to make sure.
HTTP Archive New Leadership
I announced the HTTP Archive six years ago. Six years ago! It has exceeded my expectations and its value continues to grow. In order to expand the vision, I’ve asked Ilya Grigorik, Rick Viscomi, and Pat Meenan to take over leadership of the project.
The HTTP Archive is part of the Internet Archive. The code and data are open source. The project is funded by our generous sponsors: Google, Mozilla, New Relic, O’Reilly Media, Etsy, dynaTrace, Instart Logic, Catchpoint Systems, Fastly, SOASTA mPulse, and Hosting Facts.
From the beginning, Pat and WebPageTest made the HTTP Archive possible. Ilya and Rick will join Pat to make the HTTP Archive even better. A few of the current items on the agenda:
- Enrich the collected data during the crawl: detect JavaScript libraries in use on the page, integrate and capture LightHouse audits, feature counters, and so on.
- Build new analysis pipelines to extract more information from past crawls.
- Provide better visualizations and ways to explore the gathered data.
- Improve code health and overall operation of the full pipeline.
- … and lots more – please chime in with your suggestions!
Since its inception, the HTTP Archive has become the go-to source for objective, documented data about how the Web is built. Thanks to Ilya, that data was brought to BigQuery so the community can perform its own queries and follow-on research. It’s a joy to see the data and graphs from the HTTP Archive used on a daily basis in tech articles, blog posts, tweets, etc.
I’m excited about this next phase for the HTTP Archive. Thank you to everyone who helped get the HTTP Archive to where it is today. (Especially Stephen Hay for our awesome logo!) Now let’s make the HTTP Archive even better!