I chose the 512GB version, primarily because I anticipated that some of the software/games on the Vision Pro might feature more 3D scenes, which could consume more storage space. However, since I mostly use it in a Wi-Fi environment, the 1TB version seemed unnecessary.
Having never used other VR/AR products before (Google Cardboard excluded 😂), I didn’t find the Vision Pro’s display quality particularly outstanding. Instead, I found that the display quality of the Vision Pro cannot fully replace that of the Studio Display. Compared to other types of displays, it has these characteristics:
Resolution: A 5K display is significantly sharper to the naked eye and renders color noticeably better. The clarity of the Vision Pro is roughly equivalent to a 2.5K display, and the display quality ranks: high-end monitors >> Vision Pro > high-end projectors. On close inspection I could see the pixels in the Vision Pro. My vision is not perfect (slightly nearsighted), but even without corrective lenses I could still discern pixels.
HDR Effect: Thanks to the micro-OLED panels, the HDR effect is relatively outstanding, better than monitors and projectors, but the peak highlight brightness cannot surpass the iPhone’s OLED or the MacBook Pro’s mini-LED.
Adaptive Rendering: The above clarity comparison was made when adjusting the window to the size I typically use on a monitor. However, the windows in the Vision Pro are adjustable, including distance and size. Moreover, the physical movement of the glasses in the real world affects the display: as you get closer, the image appears clearer. If the retina display is @2x-@3x, then Vision Pro is @1x-@9x (9x is just a figure of speech). All system UI in the Vision Pro, including SVGs/text on webpages, are vector rendered. You don’t need to zoom in; just moving closer to the display area reveals incredible clarity in every small letter/vector graphic.
Brightness: It doesn’t seem to reach 500 nits. If you take off the glasses during the day, the real world appears significantly brighter. However, this does not greatly affect the experience.
The quality of the passthrough is far inferior to that of the UI display, with the camera resolution not being high, making the real world appear far from 4K quality. However, its exposure control is excellent, with no overexposure or underexposure, making it viable for daily life activities like eating. Yet, it’s not suitable for reading smaller texts (too blurry to read; for example, food ingredient lists or small text on a phone are unclear).
The passthrough is just usable overall: I can wear it and walk around the house without any issues.
Initially, wearing the Solo Band for extended periods was uncomfortable, but after switching to the Dual Loop and adjusting it carefully, I found it significantly more comfortable than at the beginning, manageable for an hour or two. The Dual Loop has two adjustment points, and I recommend experimenting with it; too tight or too loose can be uncomfortable. Lying down with it is uncomfortable as it presses against the cheeks. The weight is definitely noticeable, and I hope it can be as light as glasses one day.
Overall, the control is quite convenient. If the control of a Mac’s touchpad is 10/10 and an iPad is 9/10, then the Vision Pro is already at 8.5/10 (excluding typing). Scrolling feels very responsive, with a high-refresh-rate feel, noticeably better than 60Hz. However, if you place your hand in a spot not visible to the camera (like under the table), that magical experience disappears, and you remember: ah, it’s the camera capturing my gestures.
Besides pinch 🤌 controls, you can also interact by tapping directly: bring the window “closer” and tap the floating screen with your index finger, as on an iPad. This is quite useful for websites with small, tightly spaced buttons, such as YouTube, and for sites that are not well adapted.
The control experience of iPad-compatible apps is also quite good. Much software (including apps Apple once advertised as supporting Vision Pro) is actually just the iPad-compatible version. Trying the iPad versions of Lightroom, Paramount+, Maps, and others, I found the experience surprisingly good. The biggest issue remains typing. In retrospect, Stage Manager, the mouse pointer, and the Apple Pencil hover feature that Apple added to the iPad are strikingly similar to Vision Pro’s interactions, so iPad apps that support these features perform well on Vision Pro.
My favorite method of typing is now using both index fingers to tap the keyboard. I’ve gradually gotten used to it, but it’s still not as good as the iPhone’s small-screen keyboard. The Bluetooth keyboard remains to be tried.
After using the Vision Pro for a while, whenever I see a screen in the real world that feels far away or improperly positioned, I instinctively want to 🤌 adjust it, only to realize I can’t 😂. I think this will become a subconscious gesture in the future, similar to scenarios described in Liu Cixin’s “The Three-Body Problem”.
The software feels like a late beta to me, not a final release. First, it currently only supports English (US) input; no other language (including Chinese) is supported, either for the keyboard or for Siri voice commands. I also noticed that visionOS 1.0.2 does not support iMessage Contact Key Verification, which was introduced with iOS 17.2. Xcode hints at why: if you set the Minimum Deployment in Xcode 15.2 to iOS 17.2, the app won’t run on visionOS 1.0.2. From this one can conclude that visionOS 1.0.2 corresponds to iOS 17.1. It suggests Apple rushed the Vision Pro release, perhaps because visionOS 1.1 still had too many bugs to ship, leaving the current feature mismatches. For reference: iOS 17.1 was released on October 25, 2023, and iOS 17.2 on December 11, 2023.
There is no capability for external USB devices, which is a downside compared to the iPhone and iPad. Therefore, Vision Pro does not support YubiKey 5C NFC, as it lacks both a USB port and NFC. If you want to log into a website but have only configured YubiKey, unfortunately, you can’t log in. However, Passkey is supported.
Many are interested in extending the Mac screen. Personally, I find the extended screen experience far inferior to that of real monitors. For the same price, you can buy a monitor with much better display quality than Vision Pro. But if portability is your main concern, such as wanting a monitor on a plane or in a hotel, or if you value privacy, such as not wanting others to see your screen, then it has some use.
I find the display quality of Vision Pro slightly worse than that of mid-range 4K monitors (around $1000). There is also some latency, similar to the latency experienced when using an iPad as an extended screen. I would not use Vision Pro primarily for this purpose.
It’s usable, but it only recreates 90-95% of a person’s likeness and sits somewhat in the uncanny valley. Still, having it is better than nothing, and I think using it for video conferences wouldn’t be too awkward.
Using it in a moving vehicle is completely impractical, even with Travel Mode enabled, at least in a small car. First, the Vision Pro can lose tracking because of the car windows, sometimes thinking you are moving quickly and at other times failing to locate itself at all; that could be considered a software issue. The next problem is physical: because of its weight, the acceleration from every bump presses the device against your face, which becomes extremely uncomfortable. It is therefore not recommended during takeoff, landing, taxiing, or turbulence; beware of facial pain.
So an iPhone or iPad gives a better experience while riding, and I will not use the Vision Pro in a moving vehicle again.
EyeSight is very limited, with low brightness and resolution. iFixit’s teardown revealed the reason: compromises had to be made to reproduce the eyes in 3D. The rendering of the eyes must be three-dimensional, otherwise bystanders cannot tell who you are looking at.
I believe Vision Pro is nearing maturity, with the software already at the usability of a final beta. There are still shortcomings on the software side, but I think they will be addressed quickly in the coming months.
I found out that my Alibaba Cloud Hong Kong server was unavailable after I woke up in the morning. At first I thought it had been blocked by a mysterious force, but after seeing the alert email I found that the Hong Kong server was also unreachable from regions outside China.
Then I tested it and found that the host could be pinged but could not be reached over SSH, and the HTTP/HTTPS services were also down. The yangxi.tech website was affected as well. I logged in to Route 53 in AWS China to check the alarms and found none (both my guozeyu.com and yangxi.tech use Route 53 in China). Looking at the recent monitoring, I discovered a configuration error on Route 53 that meant it was not actually monitoring the Alibaba Cloud Hong Kong server… so I fixed the monitoring rules. I then logged into the AWS international region and found that its monitoring of Alibaba Cloud Hong Kong was alarming normally: the outage started at 10:49, when the server became unreachable from every monitoring point around the world.
Since I had corrected the alarm rules in the AWS China region by then, automatic failover was performed at the DNS level according to the rules; moreover, the DNS TTL I had configured was only 120 seconds, so all my websites should have been fully back to normal by 11:20. Had the alert rules not been misconfigured, my sites would not have been down for more than 5 minutes in total.
Then I logged in to my self-built Observium monitoring and saw, unsurprisingly, that the Alibaba Cloud Hong Kong machine was down. But I noticed that before the crash the server’s CPU was at 100%. I wondered whether some program on my own server had gotten stuck and eaten all the resources, making the HTTP/HTTPS/SSH services appear unavailable.
It was already 11:30 by then, and I still thought this was my own problem; after all, I had not received any email or phone alert from Alibaba Cloud. So I logged into the Alibaba Cloud console and tried to restart the server. At that point there was no option to force a shutdown. The shutdown command ran for 10 minutes and the machine still would not turn off. On GCP, if a shutdown command runs for more than about two minutes, a forced shutdown is performed automatically, so a shutdown taking 10 minutes is outrageous. I then tried to file a ticket, but could not even submit one and was redirected to the online customer service. After I described the problem, no one responded. I tried to file a ticket again; no one responded to that either. I only got a bot reply saying that Windows may perform a system update on shutdown, which sometimes takes 10-15 minutes, and to contact customer service after waiting 20 minutes.
After about another half hour, the machine finally shut down, but despite several attempts it would not start up again: even after I hit start, it fell back to the “Stopped” state after a while. At this point I still thought it was my own problem rather than Alibaba Cloud’s. Thinking the system had been damaged by the forced shutdown, I tried to take a snapshot of the existing cloud disk and roll it back. But something outrageous happened again: I could not take snapshots at all! The snapshot progress stayed at 0%.
At 11:55, I finally received a reply to the ticket: “Hello! Alibaba Cloud monitoring has found that equipment in a computer room in Hong Kong is abnormal, affecting cloud products such as ECS cloud servers and the PolarDB cloud database in Hong Kong Availability Zone C. Alibaba Cloud engineers are handling it urgently. We are very sorry for the inconvenience; if you have any questions, please feel free to contact us.” Only then did I realize that this outage might not be my fault, and I gave up trying to restore the service myself.
The reply offered no solution and no estimated recovery time. It essentially just told me “this is Alibaba Cloud’s problem” and did nothing substantive to help restore the service.
The most complaint-worthy part of this incident is that I never received any notification from Alibaba Cloud. To this day I have not received a single email, text message, or in-console message about the downtime. Alibaba Cloud has extensive monitoring, and I firmly believe it knew about the problem with my server and other customers’ servers long before I discovered it myself. Yet I got no notification; the only way I learned the problem was on Alibaba Cloud’s side was by filing a ticket and asking.
I think Alibaba Cloud needs to send notifications to all affected and potentially affected customers. Even after 8 hours of continuous downtime without recovery, Alibaba Cloud still had not proactively notified customers. It is hard for me not to suspect that Alibaba Cloud deliberately withheld notification in an attempt to cover up the problem.
I will. As the largest cloud provider in China and one of the top cloud providers in the world, Alibaba Cloud’s technical strength is not something I doubt. I currently use Alibaba Cloud mainly for the CN2 premium network of its Hong Kong region, which I use for cross-border traffic (priced at 3 yuan/GB); there is currently no alternative for this. I have also considered a dedicated line, but it is too expensive to afford.
However, I will not put my core business on Alibaba Cloud; I feel more at ease on AWS. The Alibaba Cloud Hong Kong downtime affected my website only briefly precisely because my core business is not on Alibaba Cloud.
Very simple, this site uses servers from four different service providers in four regions:
And this site is a purely static site: after each build, the artifacts are synchronized to the four servers, and there is no concept of a master server. The site uses GeoDNS to route visitors to the nearest server to reduce latency. When the Alibaba Cloud Hong Kong server went down, all that was needed was to stop resolving to it: originally the Asia-Pacific region resolved to Hong Kong; now Asia resolves to Beijing and Australia resolves to the United States, diverting those users to available servers.
This site also uses the Health Check function of Route 53, which monitors each host’s status in near real time and automatically stops resolving to a host when an anomaly is detected. Currently, my site detects the anomaly within a minute of it happening and stops resolving to the failed host. The TTL of the relevant DNS records is set to 120 seconds, so when a server becomes unavailable, users are affected for no more than 3 minutes.
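For reference, a health check like the ones described can be created with the AWS CLI. The following is a minimal sketch, where the IP address, reference string, and thresholds are placeholder assumptions, not my actual configuration:

```
# Create an HTTPS health check that probes the server every 30 seconds
# and marks it unhealthy after 3 consecutive failures.
aws route53 create-health-check \
  --caller-reference hk-server-check-001 \
  --health-check-config IPAddress=203.0.113.10,Port=443,Type=HTTPS,ResourcePath=/,RequestInterval=30,FailureThreshold=3
```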
Recently, I used a Chinese mainland iPhone and a US iPhone to compare mid-band and millimeter-wave 5G, tested on the same carrier (Verizon Prepaid), the same plan (Unlimited Plus), with a physical SIM, in exactly the same geographic location.
As you can see, millimeter-wave (high-band, mmWave) 5G easily reached 2000Mbps. In theory mmWave can reach 3000Mbps, but over many attempts my best was “only” 2000Mbps.
Mid-band 5G “only” reached 929Mbps.
So in this test, mmWave 5G was about 2 times faster than mid-band 5G. In their respective ideal cases, mmWave can be 2-3 times faster than mid-band.
Enter `*3001#12345#*` on the dial pad of the Phone app, then tap call:
Then we can see the Field Test Mode as shown below:
Then tap More at the top right and select Nr ConnectionStats under 5G. If you cannot see a 5G section, there is currently no 5G signal.
Then look at the numbers in Band:
If the number is less than 100 (as shown above), mmWave is not in use. If it is greater than 100 (commonly 257-262), you are connected to mmWave 5G. For the specific frequencies in use, refer to this table.
Not all 5G-enabled iPhones support mmWave 5G. Currently only the iPhone 12 and 13 series purchased in the U.S. can use mmWave 5G networks in the U.S.
According to [Apple’s official website](https://support.apple.com/zh-cn/HT211828), 5G has several status icons. If only 5G is displayed, you are connected to the most common, relatively slow 5G. If you see 5G+, 5G UW, or 5G UC, you may be connected to the faster mmWave 5G. In reality, though, showing 5G+, 5G UW, or 5G UC does not necessarily mean mmWave is in use (it may be only mid-band 5G). Also, in countries other than the US, only 5G is shown even when connected to mid-band 5G.
Low-band is below 1 GHz, mid-band is 1-6 GHz, and millimeter wave is 24-40 GHz.
Low-band 5G is marketed as 5G Nationwide (Verizon), Extended Range 5G (T-Mobile), or simply 5G (AT&T). It has the widest coverage, but the speed is not ideal, sometimes not even as fast as 4G/LTE. Many existing 4G base stations can be easily upgraded to low-band 5G. In my opinion, it is only a quasi-5G network.
Mid-band and mmWave 5G are marketed as 5G Ultra Wideband (Verizon), Ultra Capacity 5G (T-Mobile), and 5G+ (AT&T). These are real 5G networks.
Currently, all carriers in China use mid-band for 5G. Compared with millimeter wave, mid-band has wider coverage with the same number of base stations, because its longer wavelength is less likely to be blocked by obstacles during propagation.
LTE-Advanced, marketed as 5G Evolution (AT&T), refers to 4G networks using technologies such as carrier aggregation, 4x4 MIMO, and 256-QAM. It is not a 5G network at all, just a faster 4G network.
Search for your phone’s model number on Apple’s official website (for example, A2629, A2634, A2639, and A2644 are the iPhone 13 series models for mainland China, Hong Kong, and Macau), then check whether any of the bands n257-n262 are supported. As of now, only the iPhone 12/13 sold in the US supports mmWave. You can find the model number on the back of the device. The models that currently support mmWave are:
There’s an easier way: look for the millimeter-wave antenna opening on the right side of the iPhone (image credit: Apple).
This mmWave antenna opening looks very similar to the Apple Pencil wireless-charging area on the iPad series, but they are really not the same thing; don’t confuse them.
Similarly, you can search for your iPad model on Apple’s official website; the model number is on the back of the device. The models that currently support mmWave are:
It’s important to note that this keyboard uses a dedicated protocol to communicate with the iPad and does not support Bluetooth. That is, it can only be used with the iPad Pro.
Weight: This keyboard is much heavier than the Magic Keyboard for Mac plus the Magic Trackpad for Mac, so its portability is worse than carrying the Mac keyboard and trackpad. The 11-inch version weighs around 600g; for reference, the 11-inch iPad Pro weighs 471g, the 11-inch Smart Keyboard Folio about 300g, and the Magic Keyboard for Mac plus Magic Trackpad for Mac 231g + 231g = 462g. And the latter’s trackpad is much larger than the iPad’s and supports haptic feedback. Attaching this keyboard more than doubles the weight of the device, which is a bit much. The main cause is the double-sided protection; if the protection on the back could be omitted, it would be much lighter.
Category | Model | Weight |
---|---|---|
iPad 11-inch | Magic Keyboard | 593g |
 | Body | 471g |
 | Smart Keyboard Folio | 297g |
 | Body + Magic Keyboard | 1.06kg |
iPad 12.9-inch | Magic Keyboard | 699g |
 | Body | 641g |
 | Smart Keyboard Folio | 407g |
 | Body + Magic Keyboard | 1.34kg |
Mac | Magic Keyboard | 231g |
 | Magic Trackpad | 231g |
MacBook | 13-inch Air | 1.29kg |
 | 13-inch Pro | 1.37kg |
 | 16-inch Pro | 2.0kg |
* The weights of the Magic Keyboard for iPad Pro and Smart Keyboard Folio are not official data
The trackpad’s three-finger multitasking gesture was not enabled by default for me. You need to go to Settings → Home Screen & Dock → Multitasking and turn on the “Gestures” switch. Once enabled, four-finger gestures on the screen will also trigger multitasking.
If pointer animation is on, the pointer disappears when you move over a button and is replaced by a highlight that snaps to the button. This feature can be turned off, which is closer to the experience on macOS.
Similarly, iPadOS enables trackpad inertia by default, and turning this feature off is also closer to the experience on macOS.
Similar to the Mac, the iPad has also introduced a secondary click function, which can replace Haptic Touch and is very easy to use.
Two-finger press secondary click
In my personal experience, and according to reports online, this trackpad is significantly easier to use than third-party ones, and the effect is comparable to the Magic Trackpad for Mac. Apple currently seems to have a bug (possibly intentional) that affects multi-touch and cursor smoothness on third-party trackpads.
The keyboard uses the latest scissor-switch structure, with a feel between the latest 16-inch MacBook Pro and the Magic Keyboard for Mac. The typing experience is much better than the Smart Keyboard Folio.
Although the trackpad does not support haptic feedback, the entire surface can be physically pressed, with no dead corners.
The hinges provide enough damping that, at any angle, the iPad’s own weight cannot make it tilt on its own. It can be used on your lap.
The overall experience is acceptable; the only drawbacks are that it is too heavy and thick on both sides (the back is much bulkier than an ordinary folio because it supports charging and angle adjustment). Also, the shell material appears unchanged from the previous generation, so it will show serious wear after long use.
You can buy it at an Apple Store or online. The education discount takes ¥200 off, for both students and teachers. Pay attention to the keyboard layout when purchasing; I bought US English. Physical stores in mainland China may only stock the Simplified Chinese (Pinyin) version.
By using PowerDNS, you can build on your own server a DNS service that supports geo-based resolution (down to country, city, ASN, and custom IP ranges), EDNS Client Subnet, DNSSEC, and IPv6.
This article is for domain name owners and webmasters, and covers authoritative DNS servers, not DNS caching servers. If you want to know more about DNS, please refer to Detailed Explanation of the DNS Domain Name Resolution System: Basics, and Introduction to DNSSEC: How Signatures Work.
This article is an update to “Self-built PowerDNS intelligent resolution server”. The advantages and disadvantages of self-built DNS, configuring DNSSEC, and distributed DNS services were covered in the previous article and are omitted here. This article mainly updates the following content:
Since operating systems ship different default versions in their software sources, it is recommended to configure the PowerDNS repositories for the PowerDNS Authoritative Server (please add version 4.2 or above). You can also configure dnsdist’s repositories (please add version 1.2 or above).
After adding the repositories, update the package list and install `pdns-server` and `pdns-backend-geoip`. Under Debian/Ubuntu:

```
sudo apt-get update
sudo apt-get install pdns-server pdns-backend-geoip
```
It should be noted that Ubuntu 18.04 LTS and later may include the `systemd-resolved` service by default. This service occupies port 53 on the loopback address, which conflicts with the listener (`0.0.0.0:53`) required by PowerDNS. You can: 1. disable this service; 2. modify PowerDNS’s [local-address](https://doc.powerdns.com/md/authoritative/settings/#local-address) and [local-ipv6](https://doc.powerdns.com/md/authoritative/settings/#local-ipv6) to listen only on external network IP addresses; or 3. modify [local-port](https://doc.powerdns.com/md/authoritative/settings/#local-port) to a non-53 port and forward to it using dnsdist (see below for details).
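For option 3, the relevant lines in /etc/powerdns/pdns.conf might look like the following sketch (the port number is an assumption chosen to match the dnsdist example later in this article):

```
# Listen only on loopback, on a non-53 port; dnsdist will own port 53 and forward here
local-address=127.0.0.1
local-port=8053
```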
If you want to rate-limit queries per client IP, also install dnsdist:
sudo apt-get install dnsdist
If you choose to disable `systemd-resolved`, you can run:

```
sudo systemctl disable systemd-resolved.service
sudo systemctl stop systemd-resolved
```
To implement geo-based resolution, we need a GeoIP database. The latest version of the PowerDNS GeoIP backend (4.2.0) supports GeoIP2’s mmdb format in addition to the legacy dat format; older versions only support dat. Since MaxMind has stopped maintaining the free dat-format database, it is highly recommended to use PowerDNS 4.2 or above with the mmdb-format IP database.
First create the configuration file /etc/GeoIP.conf with the following content (covering IPv4 and IPv6 country, city, and ASN data):

```
ProductIds GeoLite2-Country GeoLite2-City GeoLite2-ASN
DatabaseDirectory /usr/share/GeoIP
```
It is recommended to install geoipupdate, which can be installed in Ubuntu as follows:
```
sudo add-apt-repository ppa:maxmind/ppa
sudo apt update
sudo apt install geoipupdate
```
Then download/update GeoIP2:
sudo geoipupdate -v
All done!
It is also recommended to configure a crontab entry that periodically runs the above command (and reloads the PowerDNS service) to keep the GeoIP database up to date.
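A sketch of such a crontab entry; the schedule and the restart command are assumptions, so adjust them to your setup:

```
# Update the GeoIP databases every Wednesday at 04:30, then restart pdns
# so the GeoIP backend picks up the new mmdb files.
30 4 * * 3 /usr/bin/geoipupdate -v && /bin/systemctl restart pdns
```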
The configuration files of PowerDNS (hereinafter “pdns”) live in /etc/powerdns. First delete PowerDNS’s stock demo configuration, then create a folder to store the DNSSEC key files:

```
sudo rm /etc/powerdns/pdns.d/*
sudo mkdir /etc/powerdns/keys
```
Then create /etc/powerdns/pdns.d/geoip.conf with the following content:

```
launch=geoip
geoip-database-files=/usr/share/GeoIP/GeoLite2-Country.mmdb /usr/share/GeoIP/GeoLite2-City.mmdb /usr/share/GeoIP/GeoLite2-ASN.mmdb
geoip-zones-file=/etc/powerdns/zones.yaml
geoip-dnssec-keydir=/etc/powerdns/keys
```
In the `geoip-database-files` option, you can configure one or more GeoIP databases as needed.
You can create the file /etc/powerdns/zones.yaml to configure DNS records; for the specific format, see Zonefile format.
Note that with mmdb, `%re` is the two-letter ISO 3166 region abbreviation; with dat, it is the GeoIP region code.
Since the PowerDNS GeoIP backend only supports geo resolution configured per domain, not per record, to set up geo resolution for a root domain you can refer to the following configuration (YAML format, same below):
```yaml
domains:
- domain: example.com
  ttl: 300
  records:
    unknown.cn.example.com:
      - soa: &soa ns1.example.com hostmaster.example.com 2014090125 7200 3600 1209600 3600
      - ns: &ns1
          content: ns1.example.com
          ttl: 600
      - ns: &ns2 ns2.example.com
      - mx: &mx 10 mx.example.com
      - a: 10.1.1.1
    bj.cn.example.com:
      - soa: *soa
      - ns: *ns1
      - ns: *ns2
      - mx: *mx
      - a: 10.1.2.1
    unknown.unknown.example.com:
      - soa: *soa
      - ns: *ns1
      - ns: *ns2
      - mx: *mx
      - a: 10.1.3.1
    ns1.example.com:
      - a: 10.0.1.1
      - aaaa: ::1
    ns2.example.com:
      - a: 10.0.2.1
  services:
    example.com: ['%ci.%cc.example.com', 'unknown.%cc.example.com', 'unknown.unknown.example.com']
```
As you can see, to configure geo resolution for a domain we need to configure every record type for each region, and a root domain usually has many record types, so the configuration is relatively cumbersome. The YAML above uses anchors to reduce repetition: `&variable` defines an anchor, and `*variable` references it.
You can debug by configuring a record like the following:

```yaml
debug.example.com:
  - a: "%ip4"
  - aaaa: "%ip6"
  - txt: "co: %co; cc: %cc; cn: %cn; af: %af; re: %re; na: %na; as: %as; ci: %ci; ip: %ip"
```
Test from a client with a command like the following (replace 10.11.12.13 with the IP address the service listens on), and you will get results like this:
```
$ dig @10.11.12.13 debug.example.com
debug.example.com. 3600 IN TXT "co: us; cc: us; cn: na; af: v4; re: ca; na: quadranet enterprises llc; as: 8100; ci: los angeles; ip: 10.11.12.13"
debug.example.com. 3600 IN A 10.11.12.13
```
This way you can see the specific value of each variable.
PowerDNS 4.2 introduced the Lua records feature. The main use of Lua records is automatic failover: a record can be configured to switch automatically when a server goes down.
Since Lua records can execute arbitrary code in the Lua programming language, which is relatively dangerous, PowerDNS disables this feature by default; it needs to be enabled in the configuration file:
enable-lua-records=yes
Configure the following record to return dynamically available A records. If port 443 is reachable on both IPs, one of them is returned at random; if only one is reachable, only that one is returned; if neither is reachable, both are returned:

```yaml
sub.example.com:
  - lua: A "ifportup(443, {'192.0.2.1', '192.0.2.2'})"
```
Configure the following record to automatically check the status of a specified URL. By default the first group of IP addresses is returned; if the first group is unavailable, the second group is returned:

```yaml
sub.example.com:
  - lua: A "ifurlup('https://sub.example.com/', {{'192.0.2.1', '192.0.2.2'}, {'198.51.100.1'}})"
```
Lua records can be combined with the GeoIP functionality and can coexist with other record types. Also, the status checks are not performed synchronously with each request; availability is checked periodically in the background, so using them does not add latency to DNS queries.
By now, your DNS server should be working. Configuring dnsdist is described below.
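Before moving on, a quick sanity check may help; a sketch, where the IP address is a placeholder for your own server:

```
# Restart pdns to pick up the new configuration, then query it directly
sudo systemctl restart pdns
dig @10.11.12.13 debug.example.com txt
```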
dnsdist adds a proxy layer in front of pdns; it can be configured to rate-limit access and thus provide some anti-DoS protection. Normally, the dnsdist service starts automatically after installation.
In this example, change pdns’s main listener to 127.0.0.1:8053, then create or modify /etc/dnsdist/dnsdist.conf with the following content:

```lua
newServer{address="127.0.0.1:8053", useClientSubnet=true}
setLocal("10.0.1.1")
setACL({'0.0.0.0/0', '::/0'})
setECSSourcePrefixV4(32)
setECSSourcePrefixV6(128)
addAction(MaxQPSIPRule(10, 32, 56), TCAction())
addAction(MaxQPSIPRule(20, 32, 56), DropAction())
addAction(MaxQPSIPRule(40, 24, 48), TCAction())
addAction(MaxQPSIPRule(80, 24, 48), DropAction())
addAction(MaxQPSIPRule(160, 16, 40), TCAction())
addAction(MaxQPSIPRule(320, 16, 40), DropAction())
```
The `newServer` statement sets the server that dnsdist proxies, i.e. the IP address and port pdns listens on. `setLocal` sets the IP address dnsdist itself listens on. `setACL` sets the allowed client addresses; here all IPv4 and IPv6 access is allowed. `setECSSourcePrefixV4` and `setECSSourcePrefixV6` set how many bits of the client IP address are passed along via EDNS Client Subnet; here 32 bits are kept for IPv4 and 128 bits for IPv6, i.e. the full address is preserved. Smaller values can be used in production.
The final `addAction` lines with `MaxQPSIPRule` define the rate limits. `MaxQPSIPRule` takes three parameters: the first is the QPS limit (queries per second); the second is the CIDR prefix length for IPv4, defaulting to 32; the third is the prefix length for IPv6, defaulting to 64. `TCAction` asks the client to retry over TCP, and `DropAction` rejects the request. When an ordinary client is asked by the DNS server to use TCP, it retries the query over TCP, so normal users are unaffected, while DoS attacks generally do not target the TCP DNS service.
For example, the following statement:

```lua
addAction(MaxQPSIPRule(40, 24, 48), TCAction())
```

limits each /24 IPv4 block (256 addresses) and each /48 IPv6 block (2^80 addresses) to 40 qps; beyond that, clients are required to retry over TCP.
In addition to `TCAction` and `DropAction`, you can also use `DelayAction` (which takes an integer parameter, the number of milliseconds to delay) and `NoRecurseAction` (which specifically limits requests with the recursion desired (RD) flag set).
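For illustration, a `DelayAction` rule could look like this; a hypothetical variant of the rules above, not part of my actual configuration:

```lua
-- Instead of dropping, delay answers to heavy /24 IPv4 (/48 IPv6) sources by 300 ms
addAction(MaxQPSIPRule(80, 24, 48), DelayAction(300))
```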
Therefore, using dnsdist with pdns can make the DNS service have a certain anti-DOS ability.
TL;DR: Solid-state drives (SSDs) offer roughly 10 times the performance of mechanical hard disk drives (HDDs) and are significantly smaller and lighter, but the price per unit of capacity is also nearly 10 times higher.
Solid-state drives (SSDs) have become a fairly familiar concept. Generally, SSDs outperform traditional mechanical hard drives (HDDs) in read/write speed, IOPS, and other metrics. An ordinary HDD reads and writes at about 100Mbyte/s, while an SSD can exceed 1000Mbyte/s, nearly 10 times faster. In IOPS, an SSD can be nearly 100 times faster than an HDD. These improvements show up everywhere: copying files, opening documents and software stored on the drive, booting the system, and so on can be up to a dozen times faster (when disk I/O is the bottleneck). Of course, an SSD can also cost nearly 10 times more than an HDD. In addition, HDDs are much louder than SSDs, which are essentially silent.
Most portable hard drives on the market are low-speed HDDs, because ordinary consumers care most about capacity and price. HDDs cost less per GB, while SSDs are much more expensive, so SSD sales are comparatively low. For ¥300 you can get a 1TB HDD, while a 1TB SSD might cost ¥3000.
**Do I need an SSD?** It depends on whether you have high performance requirements. The operating system and software, for example, are well suited to an SSD, which greatly reduces boot and launch times. Data such as documents, pictures, and videos may not need an SSD as much. If you work with images or video and need to store high-quality footage, an SSD is very useful and can greatly reduce file loading times. Of course, much software is also optimized for low-performance devices: Lightroom Classic CC can generate previews to reduce file size, and Final Cut Pro and Premiere CC can generate proxies for original footage; you can store the previews or proxies on an SSD and the originals on an HDD to cut costs. Infrequently accessed data, such as backups, surveillance video, and logs, can also live on an HDD to save money.
In addition, SSDs can be lighter and smaller; an SSD can be made the size of a USB stick, whereas an HDD is much larger. Take the smallest commonly used 2.5-inch HDD as an example: it is 10cm long and 7cm wide, not counting the casing. The SSD shown in the picture above measures only 3.0cm x 2.7cm x 0.4cm and weighs less than 40g, under one-third the length of the HDD.
SSDs can also be very large; for example, the 15-inch MacBook Pro can be configured with up to a 4TB SSD. But because SSDs are so expensive, large-capacity SSDs are not common.
External SSDs have such high read/write speeds that you need at least 5Gbps USB 3.0/USB 3.1 Gen1 or 10Gbps USB 3.1 Gen2 to reflect their performance; these interfaces are sufficient for SATA SSDs. To reach full performance, however, a 20Gbps Thunderbolt 2 or 40Gbps Thunderbolt 3 interface is required, as most NVMe SSDs read and write at more than 10Gbps but less than 20Gbps. Using an SSD over 480Mbps USB 2.0 is almost pointless, so you will be hard-pressed to find such a product. Prices of external SSDs are also clearly tiered by interface: the faster the interface, the more expensive the drive. A portable SSD with a Thunderbolt port can be 2-3 times more expensive (and 2-4 times faster) than one with a USB port.
The Thunderbolt 3 connector is physically identical to USB Type-C. Some portable drives use a Type-C connector without being Thunderbolt 3; to check for Thunderbolt support, look for the Thunderbolt ⚡️ logo. Also, a portable drive using Thunderbolt 3 may not support the USB protocol at all: the HP P800 I purchased does not support USB, cannot be adapted to USB, and cannot be adapted to Thunderbolt 2 either, so its compatibility is greatly reduced. USB 3.x devices are usually backward compatible with USB 2.0, and USB Type-A and Type-C can be adapted to each other, so USB devices have better compatibility.
An SSD portable drive is similar to an ordinary (HDD-based) portable drive but with higher performance, and generally smaller in size.
An SSD USB stick is essentially an even smaller SSD portable drive, similar in size to an ordinary USB stick. Because of the smaller size, the read/write performance and heat dissipation of SSD USB sticks are generally inferior to SSD portable drives, and USB sticks that support Thunderbolt are even rarer.
 | HP P800 SSD | CHIPFANCIER SSD |
---|---|---|
Interface | Thunderbolt 3 | USB 3.1 Gen2 Type-C |
Type | Portable hard drive | USB stick |
Size (mm) | 141x72x19 | 72x18x9 |
Read speed | 2400Mbyte/s | About 500Mbyte/s |
Write speed | 1200Mbyte/s | About 450Mbyte/s |
Capacity | 256GB/512GB/1TB/2TB (2TB version not released yet) | 128GB/256GB/512GB/1TB |
Usually a hard drive needs to be formatted before use, and when formatting you must choose a disk format. Four formats are compared here: APFS, NTFS, exFAT, and FAT32. (Some software and documents can only be installed or stored on specific disk formats; those cases are not listed in the table.)
 | APFS | NTFS (v3.1) | exFAT | FAT32 |
---|---|---|---|---|
Year of release | 2017 | 2001 | 2006 | 1977 |
Applicable media | SSD recommended | Unlimited | Unlimited | Unlimited |
Windows compatible | – | Supported | 7+**** | Supported |
macOS compatible | 10.12+ (Sierra) | Supported* | Supported | Supported |
Linux compatible | – | Supported* | – | Supported |
Mobile compatible | – | – | Supported | Supported |
Snapshots | Supported | Shadow Copy** | – | – |
Encryption | FileVault | BitLocker** | – | – |
Copy-on-Write | Supported | – | – | – |
Space Sharing | Supported | – | – | – |
Maximum file size*** | ∞ | ∞ | ∞ | 4GB |
Maximum partition size*** | ∞ | ∞ | ∞ | 32GB |
Regarding operating system compatibility: if most mainstream versions of an OS support the format, it counts as “supported”. For mobile compatibility, “supported” means compatible with both iOS and Android.
Regarding iOS: currently iOS only supports importing pictures and videos from external storage, and they must be named according to a camera file structure. Although iOS’s built-in storage uses APFS, iOS does not support APFS-formatted external storage.
The disk format has little effect on performance; the choice mainly comes down to compatibility and features. The formats with good compatibility have few features, and the feature-rich ones have poor compatibility.
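For example, on macOS you can format an external disk from the command line with diskutil; a minimal sketch, where the disk identifier disk2 and the volume name MYDISK are placeholders (check the list first, since erasing is destructive):

```
# Find the identifier of the external disk
diskutil list
# Erase the whole disk as exFAT for maximum cross-platform compatibility
diskutil eraseDisk ExFAT MYDISK disk2
```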
By using external SSDs, you can expand your computer’s storage while maintaining high performance. You can even install a computer operating system on a USB stick, so you can boot into the target OS without repartitioning. If you install an operating system on an HDD portable drive, booting is very slow; on an ordinary USB stick, booting is almost unbearable. It is therefore recommended to install the operating system on an external SSD.
macOS, Windows, and Linux can all be installed on a USB stick or portable drive, but Windows has the best compatibility with all kinds of hardware. If you need to use the drive on PCs, only Windows is recommended. Windows 10 Enterprise supports the Windows To Go feature, which is optimized for installation on a USB drive or portable drive.
To install Windows To Go (WTG), you need a Windows environment: a PC with Windows installed, a Mac running Windows via Boot Camp, or a Windows virtual machine. If you happen to be using Windows 10 Enterprise, you can use the WTG tool in the Control Panel (however, I failed to install it with the built-in tool). Otherwise you can only install WTG using third-party tools; this WTG helper tool is recommended.
After a successful installation, you can boot from the USB drive when the computer starts, or boot from it in a virtual machine. Below are demonstrations of booting WTG directly on a Mac and launching it in VMware Fusion on macOS.
The video above shows booting directly into WTG on a Mac. If your Mac has a T2 security chip, you need to turn off the boot security checks (set security to none and allow booting from external media) to start Windows from an external device. You might consider enabling a firmware password for continued security (Macs without a T2 chip can also enable one). Tip: remember the firmware password; once it is forgotten, the machine can only be sent back to the factory for repair.
A freshly installed Windows may lack drivers for the Mac’s built-in keyboard and trackpad, so you will need USB input devices to complete the setup (a Magic Keyboard 2 and Magic Trackpad 2 also work as USB peripherals when wired).
You then need to install the Mac drivers for Windows. The Windows Support Software (drivers) can be downloaded manually from the Action menu in the Mac’s Boot Camp Assistant. Note that drivers differ between Mac models, so it is recommended to download them on the target machine itself. If you need to use one WTG drive on multiple Macs, you will need to install several sets of drivers. Boot Camp Assistant itself exists to help users install a dual-boot Windows on the Mac’s internal disk; it cannot install Windows To Go, and here we only use it to obtain the drivers.
It is recommended to save the drivers on another USB disk that Windows recognizes (see “Choosing the Disk Format” in this article), then boot into WTG again and install them.
If you have a virtual machine, you can also boot from a USB drive in the virtual machine.
If BitLocker is not enabled, all user documents on the USB drive are stored in plaintext, which means anyone who has your drive can read all of your data. With BitLocker enabled, the disk data is secured. However, Macs lack the hardware support BitLocker expects, so BitLocker cannot be enabled in WTG by default; the configuration under Windows needs to be modified.
Enabling BitLocker slightly increases CPU usage and slightly reduces disk performance. In addition, macOS does not recognize BitLocker-enabled disks, which means a BitLocker-enabled Windows partition cannot be read on macOS. Enable it with caution.
Installing Windows on a USB drive not only saves disk space on your computer; it also gives you an operating system you can take anywhere and use on both Macs and PCs. I have not turned on BitLocker, and I have enabled NTFS read/write on macOS, so the Windows partition can also serve as a regular USB disk for sharing data.
I have installed some common software on WTG. On other PCs I do not necessarily need to boot into the WTG system to use it; I can simply open the Program Files folder in the WTG root directory and run the software directly.
I installed Windows on the CHIPFANCIER SSD USB stick, which has a USB 3.1 Gen2 Type-C connector, so it works on a new Mac without an adapter. I also purchased a UGREEN Type-C to Type-A (USB 3.0) adapter so the stick can connect to Type-A devices.
This 5.0 release is a major version update, unlike the previous 4.9 and 4.8 updates, as the version number suggests.
The first impression of the new block editor (the Gutenberg editor) is that it is cleaner, with more white space. The first time you use it you may feel a little lost, because the familiar toolbar is gone; in its place are a simple Add Block button and a few other basic controls.
In the new editor, every paragraph, image, subheading, quote, and so on is a “block”. You can apply custom actions to each block, such as setting a different font size, font color, even background color and custom CSS for each paragraph. All of this is done visually through the settings panel on the right.
You can drag blocks directly, for example to reorder paragraphs.
The new editor is compatible with the old one. To use the old editing mode, insert one or more Classic blocks. A Classic block sits alongside the other blocks in the new editor, such as paragraph and heading blocks. It can contain multiple paragraphs and subheadings, and for simple typography it can stand in for the new block editor. Inside a Classic block you still see the familiar toolbar.
Classic Block Screenshot
When editing previously published posts and pages, the original editing mode is still used by default; that is, the content is wrapped in a single Classic block inside the new editor.
The original blocks each correspond to HTML tags and have relatively few features; Classic blocks exist for compatibility with the previous editor. To use the new features, you need to learn the new blocks, listed below.
- Paragraph: corresponds to `<p>`.
- Heading: corresponds to `<h1>` ~ `<h6>`.
- List: corresponds to a `<ul>` or `<ol>` list.
- More: corresponds to `<!--more-->`.
- Image: corresponds to `<img>`.
- Quote: corresponds to `<blockquote>`; the new version of this block supports citations (Citation).
- Video: corresponds to the `video` shortcode in the previous WordPress editor.
- Separator: corresponds to `<hr>`.
- Preformatted: corresponds to `<pre>`.
- Code: corresponds to a `<pre>` and `<code>` combination.
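For reference, the new editor stores each block in the post content as ordinary HTML wrapped in block-delimiter comments; a minimal example of what a paragraph block looks like when saved:

```html
<!-- wp:paragraph -->
<p>Hello, Gutenberg!</p>
<!-- /wp:paragraph -->
```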
combination.Many new blocks have been added in the new version. By using these blocks, you can directly insert some content you want visually without editing the source code or using plugins.
<pre>
. It can realize the monospace display of English characters.The following could have been added as Widgets in the web menu. It is now also possible to add directly in the article as a block.
In addition, there is now an Embed function, which can directly embed third-party content, such as Twitter, YouTube, SoundCloud, etc.
To better showcase all the typographic features of the new editor, WordPress also launched the new Twenty Nineteen default theme. Details
Like past default themes, the new theme is very versatile. However, I feel the new Twenty Nineteen theme is not as clean as the previous Twenty Seventeen theme (the one this blog is using).
WordPress does not guarantee continued security updates for older versions, so you should upgrade to the latest 5.0. That said, as the WordPress version history shows, WordPress still maintains 3.7 (released on 2013-10-24) and all subsequent versions, so even if you do not update to 5.0 you may continue to receive security updates.
Note that every WordPress version update, especially a major one, changes parts of the source code, which means there is no guarantee your plugins will work correctly on the new WordPress. You should check that the plugins you have enabled work on the latest version, or that updates for it have been released. If you are a plugin developer, you should keep a site running the WordPress beta, and your plugin should have been made compatible with the latest version well before its release.
Be sure to back up your site before upgrading. For WordPress, you need to back up the WordPress code directory and the database contents.
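A minimal sketch of such a backup, where the paths, database name, and user are placeholders:

```
# Archive the WordPress code directory
tar -czf wordpress-files.tar.gz /var/www/wordpress
# Dump the database contents
mysqldump -u wpuser -p wordpress_db > wordpress-db.sql
```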
This article is a comprehensive comparison of Azure DNS, NS1, and Constellix.
The three DNS providers recommended this time are all international providers that support GeoDNS and Anycast. Azure DNS and NS1 are very fast in China, with direct routes to Hong Kong/Asia and average latency within 50ms, no worse than domestic DNS providers. Internationally, all three are extremely fast and far outperform domestic providers such as DNSPod, CloudXNS, and Alibaba Cloud DNS.
Regarding GeoDNS, Constellix has the best granularity: it supports countries, provinces, and cities (including Chinese provinces and cities), and even AS numbers (allowing resolution by carrier) and custom IP ranges. NS1 supports countries and US states, as well as AS numbers and IP ranges. Azure DNS, via Traffic Manager, supports countries, states in some countries, and IP ranges; since arbitrary IP ranges are supported, GeoDNS of arbitrary precision is theoretically possible.
As for price, NS1 provides a free quota (500k requests per month), which is enough for low-traffic websites, but beyond the free quota the charges are steep: $8 per million requests. By comparison, Azure DNS and Constellix are both cheap, with Constellix starting at $5/month.
It is possible to use multiple DNS service providers at the same time, as long as the DNS records at each provider stay the same. This requires that the provider allow configuring the NS records of the apex domain (recommended but not required). Azure DNS, NS1, and Constellix introduced here, as well as the previously reviewed Route 53, Google Cloud DNS, Rage4, and Alibaba Cloud DNS, can all configure the apex NS records (Alibaba Cloud DNS and Azure DNS can only append third-party records; the others allow fully custom NS records). However, Cloudflare and the domestic DNSPod and CloudXNS cannot configure NS records under the apex domain, which means you cannot mix them well with other DNS providers.
To use multiple DNS providers, there are two approaches: primary-secondary and dual-primary. After completing the configuration, add all the NS servers to the authoritative NS records and to the NS list at the domain registrar.
Using two providers makes configuration harder, but improves the stability of the DNS service. The recent DNSPod outage and the 2016 Dyn outage made many websites inaccessible because those sites used only one DNS provider. With multiple providers, a website becomes unreachable only if all of its providers are down at once, which is obviously very unlikely.
To configure primary-secondary, the secondary provider needs to support fetching records from the primary using AXFR, and the primary needs to support AXFR transfers. Both NS1 and Constellix can act as either the primary or the secondary, which means you can use the two together and pick either one as the primary.
You can also use two primary providers and keep all records (including the NS records under the apex domain) the same (recommended, but not required). It is recommended to keep the SOA serial numbers in sync as well.
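To verify that both primaries are serving the same version of the zone, you can compare their SOA serials with dig; the name server hostnames here are hypothetical:

```
$ dig @ns1.provider-a.example example.com soa +short
$ dig @ns1.provider-b.example example.com soa +short
```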
The domain github.com uses both Route 53 and Dyn. This can be verified using the dig tool:

```
$ dig github.com ns +short
ns1.p16.dynect.net.
ns2.p16.dynect.net.
ns3.p16.dynect.net.
ns4.p16.dynect.net.
ns-1283.awsdns-32.org.
ns-421.awsdns-52.com.
ns-1707.awsdns-21.co.uk.
ns-520.awsdns-01.net.
```
Checking further, the same records are configured at both DNS providers:

```
$ dig @ns1.p16.dynect.net. github.com a +short
13.229.188.59
13.250.177.223
52.74.223.119
$ dig @ns-1283.awsdns-32.org. github.com a +short
13.229.188.59
13.250.177.223
52.74.223.119
```
Let’s take a look at these three DNS providers one by one.
List of all DNS evaluations (also includes CloudXNS, Route 53, Cloudflare, Google Cloud DNS, Rage4, and Alibaba Cloud DNS)
Azure DNS is the DNS service in the Microsoft Azure product line. It uses Anycast and has direct routes to Hong Kong/Asia nodes from China, so it is very fast.
It is worth noting that, similar to Route 53, the four name servers Azure assigns use different network segments and different routes, which may provide higher availability.
Note: Azure DNS’s geo resolution may not be compatible with IPv6, which means the resolution result may fall back to the default line.
NS1 also uses Anycast technology, and can be directly connected to Hong Kong/Asia nodes in China, so the speed is very fast.
Constellix is known for its GeoDNS capabilities and uses Anycast to keep latency low.
Constellix provides three groups totaling six DNS servers, each group using a slightly different route.
For new users, note the two proper ways to get Mac software: download from the App Store, or download over the Internet. Downloading from the App Store is safest, because everything listed there has been reviewed by Apple; be careful with software downloaded from the Internet, as it may be malicious. See the “Mac System Security” section of this article for details.
US$10/month for two Macs, or $15/month for two Macs plus five sub-accounts; a group purchase of the family plan works out to about $26 per year. Setapp is the equivalent of another Mac App Store, except that Setapp is subscription-based: instead of buying each app outright, you pay a fixed monthly fee, and you can then download all the apps that partner with Setapp and enjoy subsequent updates; bought separately, they would all require a paid buyout (or an additional subscription/in-app purchase). Setapp is priced at US$10/month for use on two Macs, with additional Macs at $5/month each. The recently released family plan costs $15/month and adds five sub-accounts (each limited to one Mac). So if you form a group to buy the family plan, it is very affordable, around $26 per person per year. Many of the apps described in this article are included in Setapp; if you have subscribed, you do not need to purchase them separately. These apps are marked.
Paid software, available through Setapp. This app lets you customize your Mac’s trackpad and mouse gestures to take full advantage of multi-touch. In addition, its built-in window snapping feature gives you the Windows-like “drag a window to the screen edge to quickly resize it”. I use BetterTouchTool mainly for window snapping.
Freeware. A content blocker: this browser extension blocks annoying ads on web pages, giving you a clean browsing experience, and it saves power as well.
Freeware. Unarchiving software that can extract compressed formats, such as RAR, that the Mac system does not support natively.
Paid software, available through Setapp. Selectively hides Mac menu bar icons. As more and more third-party software is installed, the menu bar fills up; with Bartender, some menu bar icons can be hidden or collapsed, and it can be configured to show an icon automatically when it updates.
Free software with in-app purchases; you can download it and unlock the in-app purchases through Setapp. It automatically generates icons in various styles from pictures or text and applies them to files/folders/disks, helping you create personalized, beautiful icons.
Paid software, $39.99 one-time purchase. Carbon Copy Cloner (CCC) is a full-featured backup manager. Compared with the Time Machine that comes with the Mac, it can back up external disks, back up selected directories, back up the system to an APFS-formatted hard drive, and create a bootable external disk. It fully supports APFS snapshots and has a visual interface to manage them (mount, restore, delete, etc.). Personal suggestion: wherever Time Machine fits the scenario, use Time Machine first; otherwise use CCC.
Paid software, available via Setapp. CleanMyMac helps users clean up system junk files, uninstall software, manage startup items, and do basic system monitoring.
Personal suggestion: unless you are running out of disk space, do not clean system junk frequently, especially user and system cache files; removing them may make programs run slower.
Paid software, available via Setapp. It finds duplicate or similar items on your disk and can optionally delete them to free up disk space on your Mac.
Free + in-app purchase software; you can download it and unlock the in-app purchases through Setapp. Disk recovery software for the Mac: if you delete important files by mistake and have no backup, you can try to recover them with this software. It is far better to keep backups so that you never need this type of software, but it is good to have for emergency rescue in critical situations.
Subscription software; the subscription can be unlocked through Setapp. In addition to the system's own Notes, Pages, and Microsoft Word, here is an additional recommendation: Ulysses, a Markdown-based plain text editor whose selling points are simplicity and ease of use.
Paid software, available via Setapp. It can replace iTunes for backing up, restoring backups, reinstalling the system, and upgrading the system, and offers more than iTunes: for example, it can access specific data inside backups, including each app's archives. You can also manage applications, install a previous version of an app or one that has been removed from the App Store via its .ipa file, manage phone ringtones, and so on. That said, Apple Configurator 2 is the more recommended way to manage iOS apps on a Mac.
Paid software, by subscription or one-time purchase. Compared with the password managers built into browsers (such as Safari and Chrome), 1Password supports more browsers and is cross-platform. It lets you choose the recipe for password generation, automatically flags insecure and leaked passwords through Watchtower, and stores more types of information, such as credit cards, ID cards, driver's licenses, databases, and wireless router details. With 1Password you can build the habit of setting a different, secure, random password for every website, keeping your Internet accounts safe.
In addition to Apple’s Xcode, here are some development tools for your reference:
Freeware. The interface is simple and beautiful; it supports video playback in many formats and is compatible with the Touch Bar.
Paid software, available via Setapp. A cross-platform RSS reader (the iOS version must be purchased separately) that can sync via iCloud.
Subscription software, which can be unlocked through Setapp. A full-featured accounting app for the Mac that supports various account types and can generate reports in multiple formats.
Paid software, with a [student discount](https://www.apple.com/us-edu/shop/product/BMGE2Z/A/pro-apps-bundle-for-education). Apple's professional video production software, essentially an upgraded iMovie: more custom settings, the ability to process RAW video recorded by cameras, 360-degree video creation, and more. Compared with Adobe Premiere Pro CC, it is easier to use and relatively cheap.
Subscription software; the Hong Kong region Photography Plan includes the latest Lightroom Classic CC, Lightroom CC, Photoshop CC, and 20GB of cloud storage. As for Apple's own image software: iPhoto has been merged into Photos, and the professional tool Aperture has been discontinued. Photos is good enough for the vast majority of photos, but it is still weak for RAW, and its cloud sync uploads every photo and video in the library, which is uneconomical for lossless images. The Lightroom series plays a similar role to Photos, iPhoto, and Aperture, managing photos, and comes in two current versions: Classic CC, desktop only, suited to keeping all original files locally; and CC, with desktop, mobile, and web clients, suited to keeping all originals in the cloud. Personally I tend to use Classic CC on the desktop and CC on mobile: the lossless RAW and TIFF images I process are large, so uploading the originals to the cloud is uneconomical, and I mainly edit on the desktop anyway. The desktop version of CC, in accommodating the other platforms, is not as comfortable to use on a desktop. Moreover, Classic CC can upload compressed previews to the cloud and sync edits to the other platforms via CC; for previews alone, the Photography Plan's 20GB is sufficient. Note: Creative Cloud in China is feature-limited and has no discounted plans; buying the Hong Kong region plan is recommended (it is the cheapest).
As with Windows, installing software from the Internet on a Mac requires paying close attention to where the software comes from. It is highly recommended to select "App Store and identified developers" under "Security & Privacy" in the Mac settings (the default) instead of "Anywhere". If any source is allowed, your computer will be able to run tampered software, unauthenticated software, and malware.
The vast majority of Mac software runs at this security level, and none of the software recommended above requires "Anywhere" to be enabled, unless you are downloading not the original software but a tampered copy. If this option is set to "Anywhere" on your Mac, run the following command in Terminal to re-enable the security setting:
sudo spctl --master-enable
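To confirm the current Gatekeeper state before or after making the change, a quick check works (a minimal sketch; the exact output wording may vary across macOS versions):

```
spctl --status
# Prints "assessments enabled" when Gatekeeper is on,
# or "assessments disabled" when software from any source is allowed.
```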
At the same time, it is not recommended to turn off SIP, which helps ensure the integrity of the Mac system. To check whether SIP is running, use the following command:
csrutil status
If SIP is enabled (the default), you will get `System Integrity Protection status: enabled.` as the result.
Data loss like the above causes huge losses, but if we make backups in advance, we can reduce the damage when data is lost later.
When backing up, we should follow the 3-2-1 principle to ensure that backups are reliable and effective.
In addition, you should also pay attention to the timeliness of backups, and if possible, try to shorten the backup cycle. For example, a minute-by-minute backup is more time-sensitive than an hourly backup. When data is lost, the former will only lose the last 1 minute of work, while the latter will lose the last 1 hour of work.
Generally, cloud backups are very reliable; the cloud provider takes care of the 3-2-1 principle for you. You only need to choose a cloud storage service provider and upload the files to be backed up.
It is recommended to choose a service provider that provides an automatic backup function, which can save you the step of manually selecting files to upload. Usually automatic backup also performs incremental backup of files, that is, each backup uploads only the files that have changed since the last backup, which can greatly save upload time.
A typical example of cloud backup is the iCloud backup function in iOS. After this function is enabled, the iOS device will automatically upload personal data such as pictures, contacts, documents, chat records, and software archives to the cloud. After purchasing a new iOS device, this data can be automatically restored to the new device from the cloud.
Regularly packaging important files on the server and uploading them to object storage is a simple form of backup. You can directly use Amazon S3, Google Cloud Storage, Alibaba Cloud OSS, or Tencent Cloud COS. All of these services provide 99.999999999% durability; once a file is uploaded, it is almost impossible for it to be lost accidentally. Object storage in cloud services is usually stored in multiple availability zones (usually at least three) within a region, and each availability zone holds multiple copies of the files. The availability zones are physically separated, which gives you off-site redundancy. For details about regions and availability zones, see this AWS article.
Object storage for cloud services generally offers a choice of regions; the geographically closest region is usually chosen for the lowest latency. These services are usually billed per use, mainly for storage space occupied over time plus traffic charges for transferring data. For example, backing up 1GB of data may cost only a few dollars or cents per month, or even be free (pricing differs between providers).
Much server software has integrated support for backing up to object storage: after activating object storage with the provider, you only need to configure the authorization key in the software. If the software does not integrate such a backup function, a simple backup can also be done manually: for example, use `mysqldump` to export database files, and use `gzip` and `tar` to compress and package files for backup. Object storage providers usually also offer command-line tools for uploading files: AWS has `aws`, which supports S3 operations; Google Cloud Storage has `gsutil`; Alibaba Cloud OSS has `ossutil`; and Tencent Cloud has `tccli`, which supports COS operations.
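For illustration, a minimal manual backup along these lines might look like the sketch below. The database name, web root path, and bucket name are all placeholders, and it assumes the `aws` CLI has already been configured with credentials:

```
#!/bin/sh
DATE=$(date +%F)

# Dump the database (hypothetical name "blog") and compress it
mysqldump blog | gzip > "db-$DATE.sql.gz"

# Package the web root (hypothetical path) into a compressed tarball
tar -czf "files-$DATE.tar.gz" /var/www/html

# Upload both archives to a hypothetical S3 bucket
aws s3 cp "db-$DATE.sql.gz" s3://my-backup-bucket/
aws s3 cp "files-$DATE.tar.gz" s3://my-backup-bucket/
```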
Usually a snapshot backs up all the data on the server's disk, and restoring from a snapshot is very convenient; it does not even require any backup support from software on the server. Whether the disk is damaged, system files are lost, or files are deleted, you can recover from a snapshot.
If possible, it is recommended to use both object storage backup and snapshot backup.
Although cloud backup is simple and durable, backing up large volumes of data is limited by bandwidth, and what backups need is uplink bandwidth, which is usually a fraction of the downlink bandwidth the carrier advertises. Meanwhile, even an ordinary mechanical hard disk can match the speed of a gigabit network, and gigabit broadband is still rare and very expensive. Therefore, local backup may be more appropriate when you face one or more of: large data volumes, limited bandwidth, or limited backup/restore time.
For local backup, you need to implement the 3-2-1 principle yourself: back up your data to two hard drives (over the LAN or a wired connection) and store one of them off-site. Most desktop operating systems support backup: the latest Windows has a backup function in the control panel, and macOS has Time Machine. Configuring automatic backup is recommended.
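If the built-in tools do not fit, a scheduled rsync job is one simple way to automate a local backup; below is a minimal sketch with placeholder paths:

```
# Mirror the home directory to an external drive; -a preserves
# permissions and timestamps, --delete keeps the copy exact.
rsync -a --delete /home/user/ /mnt/backup-disk/user/

# To run it hourly, add a line like this with crontab -e:
# 0 * * * * rsync -a --delete /home/user/ /mnt/backup-disk/user/
```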
If conditions permit, it is recommended to back up more important files to the cloud while backing up locally.
We should keep earlier versions of files in case we need them. When keeping multiple historical versions, using incremental backups helps save storage space. If possible, keep as many historical versions as you can.
To save space, earlier versions can be kept at relatively longer intervals. Time Machine on macOS, for example, keeps hourly backups for the past 24 hours, daily backups for the past month, and weekly backups for everything older than a month, until the disk fills up.
Some network storage services automatically keep historical versions, such as Dropbox, Google Drive, and iCloud. Some software also keeps historical versions on the local disk; Git, for example, keeps the history of every commit.
It is recommended to keep historical versions for important files first, and if possible for all files.
Generally, object storage provides multiple storage categories, and different storage categories have different pricing and usage scenarios. Reasonable use of multiple storage categories can save expenses.
AWS S3's storage classes, sorted by storage price from high to low, all offer 99.999999999% durability and are stored across multiple availability zones. Backups larger than 128KB that are infrequently accessed are best stored in STANDARD_IA, and early historical versions that are rarely accessed can be stored in GLACIER.
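With the `aws` CLI, for example, the storage class is chosen at upload time (the bucket name below is a placeholder):

```
# Infrequently accessed backup: use Standard-IA
aws s3 cp backup.tar.gz s3://my-backup-bucket/ --storage-class STANDARD_IA

# Rarely accessed early versions: use Glacier
aws s3 cp old-backup.tar.gz s3://my-backup-bucket/archive/ --storage-class GLACIER
```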
Google Cloud Storage's classes, likewise sorted by storage price from high to low, all offer 99.999999999% durability and multi-availability-zone storage. Similarly, infrequently accessed backups are best stored in Nearline, and earlier historical versions that are rarely accessed can be stored in Coldline.
The same pattern applies here: sorted by storage price from high to low, all classes offer 99.999999999% durability and multi-availability-zone storage. Infrequently accessed backups are best stored in the infrequent-access class, and early historical versions that are rarely accessed again can be stored in the archive class.
From the previous article we know that when you visit a website such as `www.example.com`, the browser sends a DNS query to a DNS cache server. Because the DNS system is huge, the query passes through several layers of DNS cache servers, and to reach the website correctly, every cache server must respond correctly.
DNS queries are transmitted in clear text, which means a middleman can alter them in transit, and can even automatically match particular domain names for special handling. Even if you use another DNS cache server, such as Google's `8.8.8.8`, the man-in-the-middle can intercept the IP packets directly and forge the response. Since China faces exactly this problem, I can easily show you what a man-in-the-middle attack looks like:
$ dig +short @4.0.0.0 facebook.com
243.185.187.39
Sending a DNS request to `4.0.0.0`, an IP address that does not point to any server, should get no response. But it actually returns a result in the country I'm in, so clearly the packet was "manipulated" in transit. Without a man-in-the-middle attack, the result looks like this:
$ dig +short @4.0.0.0 facebook.com
;; connection timed out; no servers could be reached
This is how fragile the DNS system is. As with any other Internet service, network providers, router administrators, and so on can act as "middlemen", collecting and even replacing or modifying the packets exchanged between client and server, so the client receives incorrect information. However, with the right cryptography, a middleman can be prevented from reading the transmitted data, or the receiver can at least detect whether the original data has been modified.
Talking about DNSSEC requires some knowledge of cryptography, so let's start with the basics. Cryptography is mainly divided into three categories; commonly used algorithms in each category are:
Symmetric Cryptography: AES, DES
Public Key Cryptography: RSA, ECC
Data integrity algorithm: SHA, MD5
In DNSSEC, two types of cryptography, public key cryptography and data integrity algorithms, are mainly used.
Public key cryptography is distinguished from symmetric cryptography as follows: symmetric cryptography uses the same key for encryption and decryption, while public key cryptography uses an encryption key called the public key and a decryption key called the private key. The two keys are independent, cannot stand in for each other, and the private key cannot be deduced from the public key. Both kinds of cryptography must be reversible (so the decryption algorithm can be seen as the inverse of the encryption algorithm). Expressed as functions:
For symmetric cryptography:

ciphertext = encryption algorithm(key, plaintext)
plaintext = decryption algorithm(key, ciphertext)

For public key cryptography:

ciphertext = encryption algorithm(public key, plaintext)
plaintext = decryption algorithm(private key, ciphertext)

Of course, the private key can also act as the encryption key, with the public key used to decrypt:

ciphertext = encryption algorithm(private key, plaintext)
plaintext = decryption algorithm(public key, ciphertext)
If the server wants to send a message to the client, the server holds the private key and the client holds the public key. The server encrypts the text with the private key and transmits it; the client decrypts it with the public key. Since only the server has the private key, only the server could have produced the ciphertext, so the ciphertext authenticates the sender and guarantees the data's integrity: this kind of encryption is equivalent to attaching a _signature_ to the record. Note, though, that because the public key is public, the data cannot be tampered with but can still be read. If the server here is a DNS server, this property can be brought to the DNS service, but a problem arises: how do you transmit the public key? If the public key is sent in plaintext, an attacker can simply replace it with his own and then tamper with the data.
One solution is a recognized public key server: the client's operating system ships with that server's own public key built in. The recognized server signs the public keys of other servers; the client verifies a server's public key using the built-in key, and only then communicates with that server.
DNS, however, is a huge system: in it, the root name server acts as the recognized public key server, and each first-level name server in turn acts as a sub-key server for the level below it. This layered arrangement is the basic prototype of DNSSEC.
#### Data integrity algorithms reduce the computational cost of public key cryptography

In cryptography there are also algorithms for checking data integrity. Their "encryption" needs no key, the output is irreversible (or very hard to invert), and it does not correspond one-to-one with the input; the output is usually of fixed length and is called a hash value. The property DNSSEC relies on is: once the original text is modified, the hash changes. Public key cryptography has a serious problem: encryption and decryption are far slower than symmetric encryption. To improve performance, the text to be encrypted must be shortened, and encrypting only the hash of the text makes signing much faster because the input is short. The server transmits the plaintext together with the hash of that text encrypted with the private key; the client computes the hash of the received plaintext, decrypts the ciphertext with the public key, and checks that the two values are equal, which still prevents tampering. DNSSEC uses this approach for both keys and records.
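This hash-then-sign flow can be tried out with standard OpenSSL commands; here is a minimal sketch in which the key and file names are placeholders:

```
# Generate a key pair: a private key and the matching public key
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# Sign: hash record.txt with SHA-256, then sign the hash with the private key
openssl dgst -sha256 -sign private.pem -out record.sig record.txt

# Verify: recompute the hash and check it against the signature
openssl dgst -sha256 -verify public.pem -signature record.sig record.txt
```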
DNSSEC is an extension that adds verification information to DNS records, so that software on cache servers and clients can verify the data it receives and decide whether a DNS result is genuine. As mentioned in the previous article, DNS resolution starts from the root name server and proceeds down one level at a time, which matches DNSSEC's chain of trust exactly. Deployment of DNSSEC therefore also starts from the root name server, and so does this article.
DNSSEC and HTTPS are two completely different things; here I am only comparing their encryption schemes, that is, DNSSEC's scheme versus TLS.
When configuring DNSSEC, a comparison with HTTPS shows that the keys are generated directly on your own server; in effect everything is "self-signed", with no "root certificate authority" required. The owner of a second-level domain submits the hash of his public key to the first-level registry, which signs that hash, again forming a chain of trust, one far simpler than the HTTPS chain: the operating system no longer needs dozens of built-in CA certificates, only the DS record of the root domain. Personally I think this is the more advanced model, but it requires the client to validate level by level, which costs time; HTTPS returns the whole certificate chain from a single server, so the client only needs to talk to one server. DNS records can be cached, however, so DNSSEC's latency can be reduced to some extent.
The request you send to the DNS server is plaintext, and the response is plaintext plus a signature. DNSSEC therefore only guarantees that DNS cannot be tampered with, not that it cannot be observed, so DNS is unsuitable for transmitting sensitive information; in practice DNS records are almost never sensitive. HTTPS both signs and encrypts in both directions, so it can carry sensitive information. DNSSEC signs without encrypting mainly because DNSSEC is a subset of DNS and uses the same port, so it must stay compatible with DNS, and DNS does not require the client to establish a connection: the client sends a request, the server returns a result, so DNS can run over UDP without a TCP handshake and is very fast. HTTPS is not a subset of HTTP and uses a different port; to encrypt, it must first negotiate a key with the browser, which takes several round trips and raises latency.
All of the cases just described assume there is no DNS cache server. What if there is one? In fact, some DNS cache servers already perform DNSSEC validation even when the client does not support it; if validation fails on the cache server, the result is simply not returned. Doing DNSSEC validation on cache servers adds little latency. But a question remains: what if the line between the cache server and the client is not secure? The safest approach is to validate on the client as well, at the cost of extra latency.
One DNSSEC feature, compared to HTTPS, is cacheability: even cached records can be verified, and no middleman can tamper with them. But if responses can be cached, a cache duration must be specified, and that duration must itself be tamper-proof; the signature must also be time-limited, so the client knows it received the latest record rather than an old one. Without timeliness, suppose the IP your domain resolves to changes from A to B: anyone could have saved A's signature before the change, kept answering with A and the old valid signature afterwards, and the client would notice nothing, which is not what we want. In practice an RRSIG signature therefore contains a timestamp (not a UNIX timestamp but a readable one, such as 20170215044626, meaning UTC 2017-02-15 04:46:26) indicating when the record expires: after that time the signature is invalid. The timestamp is part of the signed content, so an attacker cannot change it. Because of the timestamp, there is a limit to how long a DNS response may be cached (the expiry timestamp minus the current time). Before DNSSEC, cache duration was governed by the TTL, so for compatibility the two durations should agree exactly. This keeps DNSSEC compatible with the existing DNS protocol, guarantees security, and still uses caches to speed up client requests: a neat solution.
In fact, even if you barely understood any of the above, it will not stop you from deploying DNSSEC; but treat the process very carefully, as a small slip can leave your users unable to resolve your domain.
Since third-party DNS is being used, deploying DNSSEC requires third-party support. Common third-party DNS providers supporting DNSSEC include Cloudflare, Rage4, Google Cloud DNS (application required), and DynDNS. Before enabling DNSSEC, you first need to activate it at the provider, so that the third-party DNS starts returning DNSSEC-related records. After activation, the provider gives you a DS record, which looks like this:
tlo.xyz. 3600 IN DS 2371 13 2 913f95fd6716594b19e68352d6a76002fdfa595bb6df5caae7e671ee028ef346
At this point, go to your domain registrar and submit this DS record for your domain. (Some registrars do not support adding DS records; consider transferring to the [domain registrar on this site](https://domain.tloxygen.com) or another registrar that supports them. Some domain suffixes also do not support DS records; mainstream suffixes such as .com are recommended. The registrar on this site is used as the example here.)
Add it, save, and everything is OK. Note that Key Tag is the first field of the DS record (2371 here), Algorithm is the second (13 here), Digest Type is the third (2 here), and Digest is the last; the remaining fields need not be filled in. Some third-party DNS providers (such as Rage4) give you several DS records at once (same key tag, different algorithms and digest types), but you do not need to submit them all. I recommend submitting only the DS with Algorithm 13 and Digest Type 2, or Algorithm 8 and Digest Type 2: these are the parameters recommended by Cloudflare and used by the root domain. Submitting multiple DS records adds little security but may increase the client's computational load.
To use self-built DNS, first generate a key pair and add it to the DNS service; I have previously written about how to add DNSSEC on PowerDNS. After that you need to generate DS records, and usually many are generated; as with third-party DNS, I recommend submitting only the DS with Algorithm 13 and Digest Type 2, or Algorithm 8 and Digest Type 2, to the registrar.
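Either way, once the DS record is live at the registry, the deployment can be sanity-checked with dig (a quick sketch using this article's domain as the example; a validating resolver such as 8.8.8.8 sets the ad flag when the chain verifies):

```
# Has the DS record been published in the parent zone?
dig DS tlo.xyz +short

# Does a validating resolver accept the chain? Look for "ad" among the flags.
dig @8.8.8.8 A tlo.xyz +dnssec
```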
Cloudflare has many nodes, but sometimes many nodes are not a good thing: in most CDNs the nodes are relatively independent. First, understand how CDNs work: they generally do not pre-cache content, but cache cacheable content while proxying visitors' requests. Take this site as an example. It is hosted on a [Hong Kong virtual host](https://domain.tloxygen.com/web-hosting/index.php). If a visitor from London visits my (cacheable) website, they connect to the London node and the content is cached there. What if a visitor from Manchester then visits? The CDN has a node in Manchester, so the visitor connects to it, but Manchester has no cache, so that node goes back to the Hong Kong origin. Obviously it would be faster for Manchester to use London's cache. In short, if nodes could selectively fetch from other nodes, TTFB would be lower and the cache hit rate would rise accordingly. But a typical CDN does not do this, because the nodes are independent of each other and do not know what the others have cached. The usual solution is to place a few intermediate nodes between the edge nodes and the origin; these may be distributed across only a few regions, perhaps one for all of Europe. Then, once a London visitor's request has been cached at that European node, visitors elsewhere in Europe who hit uncached edge nodes are served from it directly. CloudFront and KeyCDN use this technique; Cloudflare does not officially explain its implementation in detail. In actual tests I observed no significant increase in cache hit rate, far from CloudFront's effect. The figure below shows the TTFB measured from these nodes, with requests initiated one by one.
Usually a node connects to the origin directly, and the network between them depends largely on the host's connectivity. With Argo Smart Routing, Cloudflare instead routes traffic over its own network. Image via Cloudflare.com.
When the test address is requested from abroad, the via field shows the IP from which Cloudflare connected to this site; a GeoIP lookup shows it is a Hong Kong IP. Cloudflare keeps long-lived connections between its own nodes and also pre-establishes connections to the origin from the server closest to it. This greatly reduces first-connection time, all the more so when the back-to-origin connection is HTTPS. Another test address of mine does not have this feature enabled; for comparison, the IP it uses to reach this site is not from Hong Kong.
The speed did improve somewhat, but not dramatically, and some nodes seem less stable with it enabled: speeds that used to be steady now swing faster and slower. It seems the best way to speed things up is still half-path encryption.
#### Cloudflare Railgun
Railgun is Cloudflare's ultimate acceleration solution for Business and Enterprise customers. To use it, first upgrade your site plan to Business or Enterprise, then install the required software on your server and configure it on Cloudflare. It is essentially bilateral acceleration software: the server keeps a long-lived encrypted TCP connection to Cloudflare, using Railgun's own protocol instead of HTTP, which clearly reduces connection latency. It also caches dynamic pages: since most dynamic pages share a lot of identical HTML, when a user requests a new page the server sends only what has changed, rather like a Gzip-style compression applied across requests.
Officially, Railgun can achieve a 99.6% compression ratio and double the speed. The actual experience bears this out:
The acceleration effect of Railgun is still very obvious, obviously stronger than Argo.
Argo is not as effective as you might think, and at $5/mo plus $0.10/GB of data it is not cheap. Of course, Argo may need a while to analyze line latency before it can optimize well; I expect to update this article in a month. Railgun's effect remains extremely significant, but it requires a Business or Enterprise plan, which is far from affordable.
Latency: Google Cloud CDN has the lowest latency, followed by Cloudflare Railgun. Traffic: for a typical dynamic CMS, Cloudflare Railgun saves roughly ten times more traffic than Google Cloud CDN. When I tested Google Cloud CDN in "Comparison of several full-site CDNs at home and abroad", I was surprised by its extremely low TTFB. On closer study, it turns out the node establishes a long connection with the host and maintains it for a long time, and all traffic goes over Google's internal network, which is similar in nature to Argo and Railgun. So the fastest service for dynamic content should be Google Cloud CDN, with Railgun roughly equivalent.
CloudFront's built-in Regional Edge Caches beat Argo Tiered Cache and Railgun at caching static content and raising the cache hit rate, but Argo Smart Routing has the advantage for dynamic, non-cacheable content. Railgun and Google Cloud CDN have no specific optimizations beyond caching at the edge nodes.
This site's resolution does not use Cloudflare but self-built DNS, because my Cloudflare domain is accessed via CNAME. The IP Cloudflare assigns does not change for long periods, so I point the overseas line directly at that IP. The point of the self-built DNS is to configure a domestic CDN line via split resolution after ICP filing. PS: everyone should know that enabling this does not speed up connections to Cloudflare from China; if you want to use Cloudflare and be faster in China, the origin is best placed on the US West Coast.
StatusCake offers both free and paid monitoring services. The free version can create an unlimited number of HTTP(s), TCP, DNS, SMTP, SSH, Ping, and Push checks, with a shortest interval of 5 minutes; alerts mainly support E-mail and Webhook. The free version does not monitor server configuration information, so nothing needs to be installed on the server.
In addition, StatusCake supports Public Reporting, you can use StatusCake to build a monitoring page. It also supports embedding online rate images and web pages in your own web pages, which is very convenient.
UptimeRobot also provides free monitoring services, supports HTTP(s), port detection, and Ping, and the monitoring period is as short as 5 minutes. There is also no support for monitoring server configuration information, so there is no need to install any software on the server. A maximum of 50 monitors can be created, and E-mail reminders are supported.
Compared to StatusCake it has fewer monitoring features, but its Public Reporting page is a bit prettier. Since individual webmasters hardly use many of StatusCake's features, UptimeRobot is a good alternative.
Finally, I'd like to introduce the powerful Stackdriver that I recently started using. Stackdriver is a server monitoring service under Google Cloud Platform (hereinafter GCP) that supports monitoring, debugging, tracing, and logging. Its Uptime Check supports HTTP(s) and TCP with a shortest interval of 1 minute, and supports E-mail, SMS, and in-app alerts. Uptime Check probes from 6 regions around the world simultaneously and shows the latency from each region, which makes it well suited to measuring CDN speed.
If you are using Google Compute Engine (hereinafter GCE) or other GCP services, this service can also record server logs, with a monthly allowance of 5 GB and $0.5/GB beyond that. In addition, it can monitor server performance across various metrics. The basic GCE panel already shows CPU and similar information, but installing the Agent on the server provides richer and more accurate data. The installation process is as follows:
curl -O https://repo.stackdriver.com/stack-install.sh
sudo bash stack-install.sh --write-gcm
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh  # install carefully, see below
Being Google's own service, installation requires no login or other setup; data is collected automatically after installation. It has two parts. Stackdriver Monitoring tracks server metrics, including data GCE cannot monitor by default, such as disk space and memory usage (Used, Buffered, and so on). Stackdriver Logging automatically syncs logs to Google's cloud, where you can search them centrally; however, the Logging agent occupies about 100MB of memory, so use it with caution on small instances. Note that only Logging is free; Monitoring is charged at $8 per month. You can also create alerts on metrics; for example, I created one that emails me when disk usage exceeds 90%. It can create charts on public pages to share server metrics and Uptime Check latency, but it cannot show the uptime percentage, which is a strange design.
Observium monitors various server metrics via SNMP, including memory, storage, and network interfaces. It can be installed for free on your own server and requires a MySQL and PHP environment; the official site gives an Apache example, but if you use Nginx you do not need to install Apache.
It renders charts as PNGs generated on the server, so they are beautiful and detailed, but since they are not vector graphics it is hard to do real-time incremental updates or inspect the value at a given moment. Observium manages many servers easily; you can try the Online Demo.
Even if you don't cache any content, a site-wide CDN has its advantages:
CloudFront runs on Amazon's own network. Its unit price is high, but billing starts from zero, which suits small and medium customers. This article focuses on making CloudFront work with WordPress to separate dynamic and static content and cache HTML pages; some other CDNs are compared afterwards.
Features of CloudFront as a site-wide CDN:
First go to the CloudFront control panel, click "Create Distribution", select "Web", and configure it roughly as follows. For the origin configuration, note that the Origin Domain Name must be a full domain name, and if HTTPS is used, a valid certificate must be configured for that domain.
For the cache configuration, I added Host to the Header whitelist, and added wordpress* and wp* to the Cookie whitelist.
Then the front-end configuration: clicking "Request or Import a Certificate with ACM" lets you apply for a certificate issued by Amazon. Fill in your website's domain name under CNAMEs.
Note that it may take up to an hour after creation before the distribution can be accessed. To use the root domain with CloudFront, I also had to move DNS resolution to Route 53. A very high-precision GeoDNS needs its resolution servers submitted to the major DNS cache servers so that those caches add it to their EDNS Client Subnet whitelist; fortunately, Route 53 is one of the most popular GeoDNS services, so as long as you use the NS records it gives you without customization, you don't need to worry. For the root domain, simply create an A record, enable Alias, and fill in the CloudFront domain name; for IPv6 support, create an AAAA record as well. This way external resolvers get A and AAAA records directly, not CNAMEs!
At this point, CloudFront is configured. CloudFront caches pages for about a week by default, so the cache must be purged when articles are updated. I wrote a plugin that purges everything on article updates/theme changes/core updates, purges the article page on new comments, and limits the purge frequency to once per 10 minutes (CloudFront purges are surprisingly slow, and only the first thousand purges are free). You are welcome to use the plugin I made. However, CloudFront's access speed in China is worse than the GCE origin I used before. What to do? No matter: Route 53 supports GeoDNS, so I still resolve China and Taiwan to the original GCE, and speed only improves. Note that for this to work the origin must also have a valid certificate (similarly, if your domain has an ICP filing, you can point China at a domestic CDN's IP to mix domestic and international CDNs). CloudFront interferes with Let's Encrypt issuance, so you need to keep port-80 file verification working by configuring Behaviors and multiple origins. In actual tests, Route 53's IPv4 recognition rate for China is 100%, while its IPv6 recognition is poor.
Resolution from China:
$ dig @8.8.8.8 +short guozeyu.com a
104.199.138.99
$ dig @8.8.8.8 +short guozeyu.com aaaa
2600:9000:2029:3c00:9:c41:b0c0:93a1
2600:9000:2029:9600:9:c41:b0c0:93a1
2600:9000:2029:2a00:9:c41:b0c0:93a1
2600:9000:2029:1600:9:c41:b0c0:93a1
2600:9000:2029:c00:9:c41:b0c0:93a1
2600:9000:2029:ce00:9:c41:b0c0:93a1
2600:9000:2029:6400:9:c41:b0c0:93a1
2600:9000:2029:ac00:9:c41:b0c0:93a1
Resolution from outside China:
$ dig @8.8.8.8 +short guozeyu.com a
52.222.238.236
52.222.238.227
52.222.238.207
52.222.238.107
52.222.238.208
52.222.238.71
52.222.238.68
52.222.238.67
$ dig @8.8.8.8 +short guozeyu.com aaaa
2600:9000:202d:5c00:9:c41:b0c0:93a1
2600:9000:202d:ec00:9:c41:b0c0:93a1
2600:9000:202d:7c00:9:c41:b0c0:93a1
2600:9000:202d:2a00:9:c41:b0c0:93a1
2600:9000:202d:9400:9:c41:b0c0:93a1
2600:9000:202d:c600:9:c41:b0c0:93a1
2600:9000:202d:f600:9:c41:b0c0:93a1
2600:9000:202d:6200:9:c41:b0c0:93a1
That's right: CloudFront assigns so many IPs that the answer looks quite impressive.
Only speeds outside China are compared here.
Again, only speeds outside China are compared.
TTFB is almost fully green after CDN is launched, and the time to establish TCP and TLS is significantly reduced.
The SSL certificate CloudFront issues for free is a multi-domain wildcard certificate (wildcard SAN) with a custom primary name, a step above Cloudflare's shared certificate; a certificate like this costs $10 per month on Cloudflare. Such certificates are hard to buy on the open market, and the price depends on the number of domains, ranging from thousands to tens of thousands of yuan a year. However, this certificate can only be used on AWS's CloudFront and load balancers.
CloudFront's long certificate chain adds to the TLS time, but since it is also a CDN, the TLS time is reduced to almost negligible levels anyway. The chain is long mainly because the Amazon Root CA is not directly trusted on macOS; if it were trusted directly, the extra chain would be unnecessary.
All providers listed below (or some versions of a provider) meet the following criteria, or are listed separately if they do not:
It has a self-built network, the fastest speed, and the lowest price; its main business is website security protection, with a CDN included. The NS service it provides is also the fastest in the industry (internationally). This site uses Cloudflare, and you are welcome to test this site's speed directly.
The proxy cf.tlo.xyz can be used to bring a domain onto the Cloudflare CDN, enabling CNAME/IP access; it also supports Railgun.
Features of Cloudflare as a site-wide CDN:
One difference is that Cloudflare issues a shared certificate, which looks like this:
I think Cloudflare issues shared certificates for two reasons. One is legacy: Cloudflare Pro's SSL service supports clients without SNI, and supporting such clients means one IP can carry only one certificate, so a shared certificate conserves IP resources. The free version now has SSL too, but although it relies on SNI, its certificate cannot be better than the paid version's, so a shared certificate is still used. The other reason is up-selling: Cloudflare now sells Dedicated SSL Certificates for those who want an independent certificate (on paid plans, clients without SNI still fall back to the shared certificate, so compatibility with devices that do not support SNI is preserved).
The introduction is the same as above. This version suits large customers: the more traffic, the better the fit. Baidu Cloud Acceleration is deeply integrated with Cloudflare; the infrastructure is in fact identical, but Baidu Cloud Acceleration does not provide an API.
Features of Cloudflare Enterprise Edition as a site-wide CDN:
It uses self-managed data centers; the network is somewhat constrained by conditions in China, and its unit price is the lowest in the industry.
Features of UPYUN as a site-wide CDN:
Both UPYUN and KeyCDN below issue independent certificates from Let's Encrypt, but they are single-domain certificates: even the www and non-www variants must be applied for separately.
It has the densest network of clusters in the world, the fastest speed, and a lower unit price; its main business is load balancing and SSL offloading, with a CDN included. Because of its low cache hit rate, it pays off only for sites with very large traffic. Precisely for this reason, Google uses this CDN system only for its heavily used search service, while many other services use Cloudflare and Fastly; Google's network has internal links to Cloudflare's and Fastly's networks. For details, see this article on this site.
Features of Google Cloud CDN as a site-wide CDN:
They rent dedicated servers from others and provide an integrated CDN service at the lowest unit price in the industry.
HTTPS is the encrypted HTTP protocol; a URL beginning with https:// means this protocol is in use. Compared to HTTP, HTTPS has all of the following features:
Since HTTPS needs to complete authentication, if you need to configure HTTPS, you must obtain a certificate issued by a recognized certificate issuer.
To deploy HTTPS you must have an SSL certificate, and SSL certificates used to cost from a few hundred to tens of thousands of yuan per year; the price was a major obstacle to deploying HTTPS. At the end of 2015, Let's Encrypt entered public beta and began issuing multi-domain certificates for free; certificates like these used to cost roughly 100 to 1,000 yuan. Even in beta, over 1 million certificates were issued in just 3 months! Let's Encrypt certificates deploy automatically: verification and issuance happen through an API, which greatly shortens the time needed to apply for a certificate, and service providers have added channels to issue Let's Encrypt certificates automatically. The arrival of Let's Encrypt has indeed greatly accelerated the adoption of HTTPS.
In early 2015, HTTP/2 officially became a standard, and major browsers and operating systems followed: Firefox 36, Chrome 41, iOS 9 & macOS 10.11 (Safari 9), Windows 10 (IE 11 & Edge). CDN providers such as Cloudflare, CloudFront, and UPYUN then added support, as did the HTTP servers Nginx and Apache. HTTP/2 was designed to replace HTTP/1.1 and SPDY. Its headline feature is multiplexing: the old tricks of merging CSS and JS files and sharding images across multiple domains become unnecessary, because where HTTP/1.1 needs a separate connection for each resource, all of a site's data in HTTP/2 travels over a single connection.
Since browsers limit the number of connections, HTTP/1.1 can only download a few files at a time; multiplexing lets these files be transferred together, greatly reducing load times.
However, browsers only implement HTTP/2 for HTTPS sites, so to improve loading speed with HTTP/2 you must use HTTPS. The emergence of HTTP/2 has therefore also promoted the adoption of HTTPS.
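Whether a site is actually served over HTTP/2 is easy to check with curl (a quick sketch; it assumes a curl build with HTTP/2 support, and example.com is a placeholder):

```
# Print the HTTP version the server negotiated
curl -sI -o /dev/null -w '%{http_version}\n' https://example.com
# "2" means HTTP/2 was used; "1.1" means the connection fell back
```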
Now, Chrome displays the word "Secure" for websites that use HTTPS (with an EV certificate, the company name is shown in the same place):
In a future version, websites without HTTPS will eventually be flagged. Chrome's plan for HTTP sites runs in stages: first a gray exclamation mark, then a red warning exclamation mark, then the red warning plus the words "Not secure"; pages with credit card or password fields are flagged first. The picture below shows the final, third stage:
You can also set this to "Always mark HTTP pages as not secure" on Chrome's settings page, which I recommend to everyone, because HTTP really is insecure! Presumably no company wants users to see its website marked "Not secure". Browser pressure plays a vital role here.
In the latest Chrome 58, the following notice already appears at non-HTTPS password fields (shown here on the weibo.com login window):
Testing shows this notice appears whenever the host page is HTTP, even if the form is submitted to an HTTPS page.
Apple began implementing ATS in late 2015, though developers could still opt out. At some point in 2017 or later (Apple has not given a firm deadline, but reviews of apps that do not enable ATS will certainly become stricter and require more justification), all newly submitted apps must enable ATS, meaning they must load all content over HTTPS. This has pushed many domestic vendors to support HTTPS.
TlOxygen, the web host recommended on this site, now supports applying for a free SSL certificate; the whole process is simple and renews automatically. It works by automatically installing acme.sh on the virtual host and then running the issuance process automatically. TlOxygen's virtual hosts also support SSH access, so you can run acme.sh or any other tool yourself.
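For reference, issuing a certificate yourself over SSH with acme.sh might look like this minimal sketch (the domain and webroot path are placeholders):

```
# Install acme.sh; this also sets up a cron job for automatic renewal
curl https://get.acme.sh | sh

# Issue a certificate using webroot (port 80 file) verification
~/.acme.sh/acme.sh --issue -d example.com -w /path/to/webroot
```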
In general, the services you need to buy to build a website fall into these three categories. From SaaS through PaaS to IaaS, ease of use goes down and scalability goes up; understanding the three service types helps you choose the right one.
Some features that may be required: depending on the functionality to be implemented, you may have requirements in the following areas, which should be prioritized when selecting services.
The providers are listed roughly in the order they were added to this list, with new additions at the end. All the providers here are personal recommendations, and the list will continue to be updated. Annotation list:
The following CDNs all support HTTPS and IPv6
There are many more SaaS offerings; so much is SaaS that only a few examples are given here.
Or a distributed virtual host, but with independent CPU and memory resources.
Annotation list:
(U): Exclusive CPU³
(C): Cloud-based technology, high availability
(S): No billing after shutdown
Vultr (6, D⁶)
OVH (6², D, C⁴)
Google Compute Engine (D⁵, C, U, S)
Linode (6)
DigitalOcean (6)
AWS EC2 (S)
(e.g. example.com). The vast majority of things involving domain names cannot do without it. For example:

- URLs (e.g. https://www.example.com)
- E-mail addresses: the @ is followed by a hostname, which is usually a domain name (e.g. webmaster@example.com)
The DNS system as a whole has layer upon layer of complexity, which can confuse people who are just starting to buy domain names. This article introduces how DNS works in detail, which helps build a deeper understanding. It covers:
Also includes:
Let’s start with the local DNS.
Local DNS is much simpler than global DNS, so let's start with it. 127.0.0.1 is often used as the loopback IP, the address a machine uses to access itself. Usually this IP corresponds to a domain name called localhost, a first-level domain (also a top-level domain) at the same level as com. How does the system map localhost to 127.0.0.1? In fact, through DNS of a sort: an operating system usually has a hosts file that defines a set of domain-name-to-IP mappings. Common hosts file contents are as follows:
127.0.0.1 localhost
::1 localhost
It maps the localhost domain name to the IP 127.0.0.1 (the second line is the IPv6 address). When you visit this domain from a browser, or ping it in the terminal, the system automatically consults the hosts file and takes the IP address from there. The hosts file can also set the IP for other domain names, overriding their values in global DNS or the local network's DNS; however, it only controls resolution on the local machine. When the hosts file appeared there was no DNS yet, and it is arguably DNS's predecessor. To share the same resolution across a network, DNS records have to be obtained from a server via IP packets. A network (here meaning a local network, such as a home router plus all the devices connected to it) contains many hosts, and communicating with them is more convenient by name. Typically, devices connected to a router are set to use the router's own DNS server, so resolutions can be obtained not only from hosts but also from that server: a DNS query fetches records from another IP, usually over UDP or TCP. My PC's network configuration, set automatically when it connected to the router, is as follows:
The focus here is the router and the search domain. My computer's hostname (the computer name) is set to ze3kr; the router learns this when the computer connects, so it assigns my host the domain name ze3kr.local. The first-level domain local is reserved exclusively for local use. When any host on this network accesses ze3kr.local, the router (10.0.1.1) answers with the corresponding IP address, so those hosts can reach my computer. As for the search domain, it saves typing the full domain name, for example:
$ ping ze3kr
PING ze3kr.local (10.0.1.201): 56 data bytes
64 bytes from 10.0.1.201: icmp_seq=0 ttl=64 time=0.053 ms
--- ze3kr.local ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.053/0.053/0.053/0.000 ms
With the search domain set, resolving the single-label name ze3kr first tries ze3kr.local; on finding that ze3kr.local resolves, the system stops there and uses that address directly. Now you understand how local DNS works. DNS's basic job is to obtain the IP corresponding to a domain name and then communicate with that IP. When a full domain name (FQDN) is resolved locally (e.g. via getaddrbyhost), the query usually goes to the DNS provided by the router. When the router receives a query for a full domain name and has no cache, it asks the next-level DNS cache server (provided by the ISP, or by an organization, such as 8.8.8.8); when that cache server has no cache either, the query goes through the global DNS. The specific query procedure is described in "Global DNS" below.
Locally, then, the resolution order is: the local cache first, then the hosts file, then the search domain, and finally a DNS query to the router.
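These layers can be poked at individually from a terminal; a quick sketch (getent is available on Linux, dscacheutil on macOS):

```
# Resolve a name through the system's lookup chain (Linux)
getent hosts localhost

# Query the local resolver/cache directly (macOS)
dscacheutil -q host -a name localhost
```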
In global DNS, a complete domain name usually contains multiple levels: for example, example.com. is a second-level domain name and www.example.com. is a third-level domain name. The domain names we commonly see are usually full domain names.
The first-level domain names are divided into three parts: normal (generic) domains, country domains, and the arpa domain. The domains we usually see are normal and country domains; the arpa domain is used for reverse resolution, i.e. translating an IP address back into a domain name. In the local DNS there is only a mapping between domain names and IP addresses, but the global DNS has many more types of resource records (RRs), not just name-to-IP mappings. Some of the most basic record types are introduced below:
A record: defines an IPv4 address (an AAAA record defines an IPv6 address). (RFC 1035)
Let's start with the root domain name. The unnamed root can be written as a single dot, "." . Many domain names in the following sections end with this dot; a name ending with "." is a full domain name anchored at the root. In practice most names written without the trailing dot are full domain names too, since the final "." is usually omitted. In this article I use dig, a common DNS tool, to perform the queries, and my computer is connected to the Internet. Suppose this computer can reach IPs on the Internet but has no DNS server configured at all. In that case you need to know the root DNS servers in order to resolve a domain name yourself. The list of root DNS servers can be downloaded here; the contents of the file are as follows:
; This file holds the information on root name servers needed to; initialize cache of Internet domain name servers; (e.g. reference this file in the "cache . <file>"; configuration file of BIND domain name servers).;; This file is made available by InterNIC; under anonymous FTP as; file /domain/named.cache; on server FTP.INTERNIC.NET; -OR- RS.INTERNIC.NET;; last update: October 20, 2016; related version of root zone: 2016102001;; formerly NS.INTERNIC.NET;. 3600000 NS A.ROOT-SERVERS.NET.A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4A.ROOT-SERVERS.NET.3600000 AAAA 2001:503:ba3e::2:30;; FORMERLY NS1.ISI.EDU;.3600000 NS B.ROOT-SERVERS.NET.B.ROOT-SERVERS.NET. 3600000 A 192.228.79.201B.ROOT-SERVERS.NET.3600000 AAAA 2001:500:84::b;; FORMERLY C.PSI.NET;.3600000 NS C.ROOT-SERVERS.NET.C.ROOT-SERVERS.NET. 3600000 A 192.33.4.12C.ROOT-SERVERS.NET.3600000 AAAA 2001:500:2::c;; FORMERLY TERP.UMD.EDU;. 3600000 NS D.ROOT-SERVERS.NET.D.ROOT-SERVERS.NET. 3600000 A 199.7.91.13D.ROOT-SERVERS.NET.3600000 AAAA 2001:500:2d::d;;FORMERLY NS.NASA.GOV;.3600000 NS E.ROOT-SERVERS.NET.E.ROOT-SERVERS.NET. 3600000 A 192.203.230.10E.ROOT-SERVERS.NET.3600000 AAAA 2001:500:a8::e;; FORMERLY NS.ISC.ORG;. 3600000 NS F.ROOT-SERVERS.NET.F.ROOT-SERVERS.NET. 3600000 A 192.5.5.241F.ROOT-SERVERS.NET.3600000 AAAA 2001:500:2f::f;; FORMERLY NS.NIC.DDN.MIL;. 3600000 NS G.ROOT-SERVERS.NET.G.ROOT-SERVERS.NET. 3600000 A 192.112.36.4G.ROOT-SERVERS.NET.3600000 AAAA 2001:500:12::d0d;; FORMERLY AOS.ARL.ARMY.MIL;. 3600000 NS H.ROOT-SERVERS.NET.H.ROOT-SERVERS.NET. 3600000 A 198.97.190.53H.ROOT-SERVERS.NET.3600000 AAAA 2001:500:1::53;; FORMERLY NIC.NORDU.NET;. 3600000 NS I.ROOT-SERVERS.NET.I.ROOT-SERVERS.NET. 3600000 A 192.36.148.17I.ROOT-SERVERS.NET.3600000 AAAA 2001:7fe::53;; OPERATED BY VERISIGN, INC.;. 3600000 NS J.ROOT-SERVERS.NET.J.ROOT-SERVERS.NET. 3600000 A 192.58.128.30J.ROOT-SERVERS.NET.3600000 AAAA 2001:503:c27::2:30;; OPERATED BY RIPE NCC;. 3600000 NS K.ROOT-SERVERS.NET.K.ROOT-SERVERS.NET. 3600000 A 193.0.14.129K.ROOT-SERVERS.NET.3600000 AAAA 2001:7fd::1;; OPERATED BY ICANN;. 3600000 NS L.ROOT-SERVERS.NET.L.ROOT-SERVERS.NET. 3600000 A 199.7.83.42L.ROOT-SERVERS.NET.3600000 AAAA 2001:500:9f::42;; OPERATED BY WIDE;. 3600000 NS M.ROOT-SERVERS.NET.M.ROOT-SERVERS.NET. 3600000 A 202.12.27.33M.ROOT-SERVERS.NET.3600000 AAAA 2001:dc3::35; End of file
Each non-comment line in this file has 4 columns: the full domain name, the time-to-live (TTL, how long the record may be cached), the resource type, and the resource data. From this file you learn the full domain names of the 13 root name servers that serve the root domain, as well as the IP addresses of those 13 names. The IPs are needed because an NS record can only point to a domain name, so extra records are required to state which IPs those 13 names correspond to (these extra records are called glue records). The 13 root name servers are independent and redundant (though they all return the same content), so resolution keeps working even if some of them fail; if the root domain could not be resolved at all, every domain name would become unreachable.
Glue record: if the full domain name an NS record points to lies inside the very domain being delegated, a glue record must be added to state the IP of that name, otherwise the delegation could never be followed. In fact every full domain name lies under the root, so the root zone must provide the IP addresses for its NS targets. Glue records are essentially A (or AAAA) records. (RFC 1033)
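As a side note outside the from-scratch walkthrough: if you already have any working resolver at hand, you can fetch the current root server list and its glue with ordinary queries instead of downloading the file (8.8.8.8 here is just an example resolver):

$ dig @8.8.8.8 . NS +short
$ dig @8.8.8.8 a.root-servers.net. A +short
$ dig @8.8.8.8 a.root-servers.net. AAAA +short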
Assume you have obtained this file by some means; the next step is to use these servers to resolve the root domain name and then a first-level domain name. Resolving the root itself is not strictly necessary, but we do it anyway to go one level deeper and get the most current, complete data from the Internet: the authoritative records for the root are whatever the root servers return, not what is in the file above. That file only lists the root servers, i.e. the NS records and the IPs (glue) for the names those NS records point to, not all records under the root. So we first query 198.41.0.4 for all records of the root domain name; "any" means any record type, and for readability some irrelevant parts of the response have been removed. (If you only want to resolve the root itself, this step cannot be skipped; if you want to go straight to a first-level domain name, it can be.)
$ dig @198.41.0.4 . any

;; QUESTION SECTION:
;.				IN	ANY

;; ANSWER SECTION:
.			518400	IN	NS	a.root-servers.net.
.			518400	IN	NS	b.root-servers.net.
.			518400	IN	NS	c.root-servers.net.
.			518400	IN	NS	d.root-servers.net.
.			518400	IN	NS	e.root-servers.net.
.			518400	IN	NS	f.root-servers.net.
.			518400	IN	NS	g.root-servers.net.
.			518400	IN	NS	h.root-servers.net.
.			518400	IN	NS	i.root-servers.net.
.			518400	IN	NS	j.root-servers.net.
.			518400	IN	NS	k.root-servers.net.
.			518400	IN	NS	l.root-servers.net.
.			518400	IN	NS	m.root-servers.net.
.			86400	IN	SOA	a.root-servers.net. nstld.verisign-grs.com. 2016122400 1800 900 604800 86400

;; ADDITIONAL SECTION:
a.root-servers.net.	518400	IN	A	198.41.0.4
b.root-servers.net.	518400	IN	A	192.228.79.201
c.root-servers.net.	518400	IN	A	192.33.4.12
d.root-servers.net.	518400	IN	A	199.7.91.13
e.root-servers.net.	518400	IN	A	192.203.230.10
f.root-servers.net.	518400	IN	A	192.5.5.241
g.root-servers.net.	518400	IN	A	192.112.36.4
h.root-servers.net.	518400	IN	A	198.97.190.53
i.root-servers.net.	518400	IN	A	192.36.148.17
j.root-servers.net.	518400	IN	A	192.58.128.30
k.root-servers.net.	518400	IN	A	193.0.14.129
l.root-servers.net.	518400	IN	A	199.7.83.42
m.root-servers.net.	518400	IN	A	202.12.27.33
a.root-servers.net.	518400	IN	AAAA	2001:503:ba3e::2:30
b.root-servers.net.	518400	IN	AAAA	2001:500:84::b
c.root-servers.net.	518400	IN	AAAA	2001:500:2::c
d.root-servers.net.	518400	IN	AAAA	2001:500:2d::d
e.root-servers.net.	518400	IN	AAAA	2001:500:a8::e
f.root-servers.net.	518400	IN	AAAA	2001:500:2f::f
g.root-servers.net.	518400	IN	AAAA	2001:500:12::d0d
h.root-servers.net.	518400	IN	AAAA	2001:500:1::53
i.root-servers.net.	518400	IN	AAAA	2001:7fe::53
j.root-servers.net.	518400	IN	AAAA	2001:503:c27::2:30
k.root-servers.net.	518400	IN	AAAA	2001:7fd::1
l.root-servers.net.	518400	IN	AAAA	2001:500:9f::42
m.root-servers.net.	518400	IN	AAAA	2001:dc3::35
You can see that the TTL differs from the one in the file above, and that an SOA record appears in addition; where they differ, the result returned by the servers is what counts. SOA records are ubiquitous; please refer to the documentation for the details, I won't go into them here.
SOA record: specifies authoritative information about the DNS zone, including the primary name server, the domain administrator's email address, the zone's serial number, and several timers for refreshing the zone. (RFC 1035)
DNS zone: for the root domain name, the DNS zone is the empty name, which means it is responsible for all domain names on the Internet. For my website, the DNS zone is guozeyu.com., which manages the records for guozeyu.com. itself and its subdomains.
The ADDITIONAL SECTION here actually contains Glue records.
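If you just want to look at a zone's SOA record without walking the hierarchy by hand, a plain recursive query is enough (a quick sketch; any resolver will do, and the output fields follow the order described in the SOA definition above):

$ dig guozeyu.com. SOA +short
$ dig . SOA +short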
The root domain's own DNS servers are used not only to resolve the root itself but also to delegate every first-level domain name on the Internet. In fact almost all DNS servers, root or not, answer for themselves and for the domains directly below them. As the previous result shows, the root has no IP address of its own, only NS records, so we use those NS records to resolve the first-level domains beneath it. Next, use one of the servers from the root's NS records to resolve a first-level domain name, com.:
$ dig @198.41.0.4 com any;; QUESTION SECTION:;com. IN ANY;; AUTHORITY SECTION:com. 172800 IN NS a.gtld-servers.net.com. 172800 IN NS b.gtld-servers.net.com. 172800 IN NS c.gtld-servers.net.com. 172800 IN NS d.gtld-servers.net.com. 172800 IN NS e.gtld-servers.net.com. 172800 IN NS f.gtld-servers.net.com. 172800 IN NS g.gtld-servers.net.com. 172800 IN NS h.gtld-servers.net.com. 172800 IN NS i.gtld-servers.net.com. 172800 IN NS j.gtld-servers.net.com. 172800 IN NS k.gtld-servers.net.com. 172800 IN NS l.gtld-servers.net.com. 172800 IN NS m.gtld-servers.net.;; ADDITIONAL SECTION:a.gtld-servers.net.172800 IN A 192.5.6.30b.gtld-servers.net.172800 IN A 192.33.14.30c.gtld-servers.net.172800 IN A 192.26.92.30d.gtld-servers.net.172800 IN A 192.31.80.30e.gtld-servers.net.172800 IN A 192.12.94.30f.gtld-servers.net.172800 IN A 192.35.51.30g.gtld-servers.net.172800 IN A 192.42.93.30h.gtld-servers.net.172800 IN A 192.54.112.30i.gtld-servers.net.172800 IN A 192.43.172.30j.gtld-servers.net.172800 IN A 192.48.79.30k.gtld-servers.net. 172800 IN A 192.52.178.30l.gtld-servers.net.172800 IN A 192.41.162.30m.gtld-servers.net.172800 IN A 192.55.83.30a.gtld-servers.net.172800 IN AAAA 2001:503:a83e::2:30b.gtld-servers.net.172800 IN AAAA 2001:503:231d::2:30
The result looks much like the one for the root: com. also has 13 name servers, though they are entirely different machines from the root servers. The ADDITIONAL SECTION again carries glue-style address records, even though gtld-servers.net. is not under com.; in reality both com. and net. belong to the same operator (Verisign), so this arrangement works fine.
As with the root, what we have resolved so far is only the list of name servers for com., not the records of com. itself: **the authoritative records for com. are whatever com.'s own name servers return.** So let's use one of com.'s name servers to resolve com. itself. (If you only want to resolve the first-level domain name, this step cannot be skipped; if you want to go straight to a second-level domain name, it can be.)
$ dig @192.5.6.30 com any;; QUESTION SECTION:;com. IN ANY;; ANSWER SECTION:com. 900 IN SOA a.gtld-servers.net. nstld.verisign-grs.com. 1482571852 1800 900 604800 86400com. 172800 IN NS e.gtld-servers.net.com. 172800 IN NS m.gtld-servers.net.com. 172800 IN NS i.gtld-servers.net.com. 172800 IN NS k.gtld-servers.net.com. 172800 IN NS b.gtld-servers.net.com. 172800 IN NS j.gtld-servers.net.com. 172800 IN NS a.gtld-servers.net.com. 172800 IN NS d.gtld-servers.net.com. 172800 IN NS g.gtld-servers.net.com. 172800 IN NS c.gtld-servers.net.com. 172800 IN NS h.gtld-servers.net.com. 172800 IN NS f.gtld-servers.net.com. 172800 IN NS l.gtld-servers.net.;; ADDITIONAL SECTION:e.gtld-servers.net.172800 IN A 192.12.94.30m.gtld-servers.net.172800 IN A 192.55.83.30i.gtld-servers.net.172800 IN A 192.43.172.30k.gtld-servers.net. 172800 IN A 192.52.178.30b.gtld-servers.net.172800 IN A 192.33.14.30b.gtld-servers.net.172800 IN AAAA 2001:503:231d::2:30j.gtld-servers.net.172800 IN A 192.48.79.30a.gtld-servers.net.172800 IN A 192.5.6.30a.gtld-servers.net.172800 IN AAAA 2001:503:a83e::2:30d.gtld-servers.net.172800 IN A 192.31.80.30g.gtld-servers.net.172800 IN A 192.42.93.30c.gtld-servers.net.172800 IN A 192.26.92.30h.gtld-servers.net.172800 IN A 192.54.112.30f.gtld-servers.net.172800 IN A 192.35.51.30l.gtld-servers.net.172800 IN A 192.41.162.30
Similar to the case of root domain name resolution, there is an additional SOA type record.
Just as we resolved the first-level domain name com., we now use com.'s name servers to resolve guozeyu.com.:
$ dig @192.5.6.30 guozeyu.com any

;; QUESTION SECTION:
;guozeyu.com.			IN	ANY

;; AUTHORITY SECTION:
guozeyu.com.		172800	IN	NS	a.ns.guozeyu.com.
guozeyu.com.		172800	IN	NS	b.ns.guozeyu.com.
guozeyu.com.		172800	IN	NS	c.ns.guozeyu.com.

;; ADDITIONAL SECTION:
a.ns.guozeyu.com.	172800	IN	AAAA	2001:4860:4802:38::6c
a.ns.guozeyu.com.	172800	IN	A	216.239.38.108
b.ns.guozeyu.com.	172800	IN	AAAA	2001:4860:4802:36::6c
b.ns.guozeyu.com.	172800	IN	A	216.239.36.108
c.ns.guozeyu.com.	172800	IN	AAAA	2001:4860:4802:34::6c
c.ns.guozeyu.com.	172800	IN	A	216.239.34.108
Here the ADDITIONAL SECTION contains true glue records, because ns.guozeyu.com. lies under guozeyu.com.. As before, the authoritative records for guozeyu.com. are whatever guozeyu.com.'s own name servers return, and this final step really matters, because guozeyu.com. carries not just an SOA record but also A records and other important records. Now use guozeyu.com.'s name server to resolve guozeyu.com. itself (if you only want to resolve the second-level domain name, this step cannot be skipped; if you want to go on to a third-level domain name, it can be):
$ dig @216.239.38.108 guozeyu.com any

;; QUESTION SECTION:
;guozeyu.com.			IN	ANY

;; ANSWER SECTION:
guozeyu.com.		21600	IN	A	104.199.138.99
guozeyu.com.		172800	IN	NS	a.ns.guozeyu.com.
guozeyu.com.		172800	IN	NS	b.ns.guozeyu.com.
guozeyu.com.		172800	IN	NS	c.ns.guozeyu.com.
guozeyu.com.		21600	IN	SOA	a.ns.guozeyu.com. support.tlo.xyz. 1 21600 3600 259200 300
guozeyu.com.		172800	IN	MX	100 us2.mx1.mailhostbox.com.
guozeyu.com.		172800	IN	MX	100 us2.mx2.mailhostbox.com.
guozeyu.com.		172800	IN	MX	100 us2.mx3.mailhostbox.com.

;; ADDITIONAL SECTION:
a.ns.guozeyu.com.	604800	IN	A	216.239.38.108
a.ns.guozeyu.com.	604800	IN	AAAA	2001:4860:4802:38::6c
b.ns.guozeyu.com.	604800	IN	A	216.239.36.108
b.ns.guozeyu.com.	604800	IN	AAAA	2001:4860:4802:36::6c
c.ns.guozeyu.com.	604800	IN	A	216.239.34.108
c.ns.guozeyu.com.	604800	IN	AAAA	2001:4860:4802:34::6c
This time A, SOA and MX records appear as well. It is precisely because of the MX records that mail sent to username@guozeyu.com does not go to the IP address of guozeyu.com. but to the servers under mailhostbox.com.. And since I own guozeyu.com., I also control the third-level and deeper domains beneath it, for example www.guozeyu.com.:
$ dig @216.239.38.108 www.guozeyu.com any

;; QUESTION SECTION:
;www.guozeyu.com.		IN	ANY

;; ANSWER SECTION:
www.guozeyu.com.	172800	IN	CNAME	guozeyu.com.
guozeyu.com.		21600	IN	A	104.199.138.99
guozeyu.com.		172800	IN	NS	a.ns.guozeyu.com.
guozeyu.com.		172800	IN	NS	b.ns.guozeyu.com.
guozeyu.com.		172800	IN	NS	c.ns.guozeyu.com.
guozeyu.com.		21600	IN	SOA	a.ns.guozeyu.com. support.tlo.xyz. 1 21600 3600 259200 300
guozeyu.com.		172800	IN	MX	100 us2.mx1.mailhostbox.com.
guozeyu.com.		172800	IN	MX	100 us2.mx2.mailhostbox.com.
guozeyu.com.		172800	IN	MX	100 us2.mx3.mailhostbox.com.

;; ADDITIONAL SECTION:
a.ns.guozeyu.com.	604800	IN	A	216.239.38.108
a.ns.guozeyu.com.	604800	IN	AAAA	2001:4860:4802:38::6c
b.ns.guozeyu.com.	604800	IN	A	216.239.36.108
b.ns.guozeyu.com.	604800	IN	AAAA	2001:4860:4802:36::6c
c.ns.guozeyu.com.	604800	IN	A	216.239.34.108
c.ns.guozeyu.com.	604800	IN	AAAA	2001:4860:4802:34::6c
My third-level domain www.guozeyu.com. uses a CNAME record pointing to guozeyu.com., which means every resource record for www.guozeyu.com. is taken from guozeyu.com.. This is also why a CNAME cannot coexist with any other record: the CNAME supersedes everything else. And since SOA, NS and MX records almost always exist at the apex of a domain, a CNAME cannot be used on the primary domain itself. I could also point a third-level name under guozeyu.com. at other name servers with NS records, letting someone else run their own zone on one of my subdomains.
Usually every resolved record is cached by the resolving server, the router, the client and individual applications, which greatly reduces the number of queries. A cached record is not resolved again within the time specified by its TTL; it is read straight from the cache. Servers, routers and so on should not extend the TTL, and for cached content the remaining TTL decreases by 1 every second.
$ dig guozeyu.com

;; QUESTION SECTION:
;guozeyu.com.			IN	A

;; ANSWER SECTION:
guozeyu.com.		21599	IN	A	104.199.138.99

;; Query time: 514 msec
;; SERVER: 10.0.1.1#53(10.0.1.1)
;; WHEN: Sat Dec 24 20:00:29 2016
;; MSG SIZE  rcvd: 43

$ dig guozeyu.com

;; QUESTION SECTION:
;guozeyu.com.			IN	A

;; ANSWER SECTION:
guozeyu.com.		21598	IN	A	104.199.138.99

;; Query time: 45 msec
;; SERVER: 10.0.1.1#53(10.0.1.1)
;; WHEN: Sat Dec 24 20:00:30 2016
This is the result of resolving guozeyu.com. through my router twice in a row. The first query clearly missed every cache: the router asked the carrier's DNS server, which resolved guozeyu.com. level by level starting from the root, taking 514 ms in total. The answer was then cached on both the carrier's server and the router. The second query hit the router's cache, so the router returned the record directly, taking only 45 ms. The two queries were one second apart, which is why the TTL has also decreased by 1.
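If you are experimenting with this yourself and want to start from a cold local cache, these are the usual commands (a side note; which one applies depends on your operating system):

# macOS
$ sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

# Linux with systemd-resolved
$ sudo resolvectl flush-caches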
Reverse resolution under arpa
Reverse resolution answers the question: given an IP address, what domain name corresponds to it? The function gethostbyaddr obtains the domain name for an IP this way. The first-level domain name arpa. is used to set up reverse resolution. For my website's IPv4 address 104.199.138.99, the corresponding full domain name under arpa is 99.138.199.104.in-addr.arpa.; by fetching the PTR record of that name you get the full domain name corresponding to the IP.
$ host 104.199.138.99
99.138.199.104.in-addr.arpa domain name pointer 99.138.199.104.bc.googleusercontent.com.

$ dig 99.138.199.104.in-addr.arpa. ptr +short
99.138.199.104.bc.googleusercontent.com.
As you can see, the host command and dig return the same name, 99.138.199.104.bc.googleusercontent.com., which suggests this IP belongs to Google. Note that the labels under in-addr.arpa. are in exactly the reverse order of the original IPv4 address. The PTR record is normally set by the owner of the IP, but the owner can point it at any domain name, including someone else's, so this alone is not proof that the IP belongs to Google; we still need to verify it:
$ dig 99.138.199.104.bc.googleusercontent.com a +short
104.199.138.99
The name resolves back to the original IP, so the verification succeeds and the IP is confirmed to belong to Google. In other words, after gethostbyaddr you should verify the result with a forward lookup (gethostbyname). But who runs the name servers for this reverse zone? Let's look at the records of the parent domain of 99.138.199.104.in-addr.arpa.:
$ dig 138.199.104.in-addr.arpa. any

;; QUESTION SECTION:
;138.199.104.in-addr.arpa.	IN	ANY

;; ANSWER SECTION:
138.199.104.in-addr.arpa. 21583	IN	NS	ns-gce-public1.googledomains.com.
138.199.104.in-addr.arpa. 21583	IN	NS	ns-gce-public2.googledomains.com.
138.199.104.in-addr.arpa. 21583	IN	NS	ns-gce-public3.googledomains.com.
138.199.104.in-addr.arpa. 21583	IN	NS	ns-gce-public4.googledomains.com.
138.199.104.in-addr.arpa. 21583	IN	SOA	ns-gce-public1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300
Google's name servers are configured at the parent level, 138.199.104.in-addr.arpa., so the block 104.199.138.0/24 (104.199.138.0 - 104.199.138.255) belongs to Google. And since the name servers for this zone are Google's own, any kind of record can be set under it, not just PTR; you could even add A records and serve a website on an arpa domain name! I don't own a block of IPv4 addresses that large, but I do have the IPv6 prefix 2001:470:f913::/48 and can set NS records for it under arpa.. The full reverse-resolution name for this /48 is 3.1.9.f.0.7.4.0.1.0.0.2.ip6.arpa.; as you can see, IPv6 reverse resolution lives under a different second-level domain, ip6.arpa., and the name is built by splitting the IPv6 address into individual hexadecimal digits and arranging them in reverse order. I delegated this name to name servers I control and added A records, and just like that the reverse-resolution domain works like a normal one!
3.1.9.f.0.7.4.0.1.0.0.2.ip6.arpa
ze3kr.3.1.9.f.0.7.4.0.1.0.0.2.ip6.arpa
If you open either of these domains in a browser you will get a certificate error, because I haven't issued certificates for them.
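Incidentally, dig can build these reverse names for you with the -x option, so you don't have to reverse the labels by hand; this is just a convenience equivalent to the manual queries above:

$ dig -x 104.199.138.99 +short
$ dig -x 2001:470:f913::1 +short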
This article and the unpublished "Detailed Explanation of the Domain Name Resolution System (DNS) - DNSSEC" have been selected for "Knock on the Door of the Internet World".
AMP HTML is still HTML, but a standard-compliant AMP page may only use a limited set of tags, and many of its tags are not standard HTML, such as <amp-img> and <amp-video>. A "Hello, AMPs" example:
<!doctype html>
<html amp lang="en">
  <head>
    <meta charset="utf-8">
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <title>Hello, AMPs</title>
    <link rel="canonical" href="http://example.ampproject.org/article-metadata.html" />
    <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
    <script type="application/ld+json">
      {
        "@context": "http://schema.org",
        "@type": "NewsArticle",
        "headline": "Open-source framework for publishing content",
        "datePublished": "2015-10-07T12:02:41Z",
        "image": [
          "logo.jpg"
        ]
      }
    </script>
    <style amp-boilerplate>body{-webkit-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-moz-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-ms-animation:-amp-start 8s steps(1,end) 0s 1 normal both;animation:-amp-start 8s steps(1,end) 0s 1 normal both}@-webkit-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-moz-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-ms-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-o-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}</style><noscript><style amp-boilerplate>body{-webkit-animation:none;-moz-animation:none;-ms-animation:none;animation:none}</style></noscript>
  </head>
  <body>
    <h1>Welcome to the mobile web</h1>
  </body>
</html>
This code can be run directly in major browsers. Compared with normal HTML, it has the following differences:
- The amp attribute on <html> marks this as an AMP page.
- The <head> must contain the AMP Runtime: <script async src="https://cdn.ampproject.org/v0.js"></script>.
- There are two style blocks with the amp-boilerplate attribute, one of which is inside <noscript>.
- The <head> needs a <script type="application/ld+json"> block to mark the metadata of the document.

Example of inserting an image in AMP HTML:
<amp-img src="450px.jpg" srcset="450px.jpg 450w, 1200px.jpg 1200w"></amp-img>
You will notice this is essentially the same syntax as an ordinary img tag, just with a different name. When an image is included this way, the AMP Runtime (the v0.js referenced in the head) parses the AMP component and picks which image to load based on the device's resolution (under the hood it simply creates an img tag via JS for the browser), so the browser doesn't even need to support the [srcset](https://guozeyu.com/2015/08/using-srcset/) attribute, and images are lazy-loaded automatically. Video and audio also have to be inserted with AMP-specific components. If JS is disabled in the browser, none of the images or videos that use these non-standard tags will be displayed.
AMP forbids any JS of your own, purely to keep pages loading fast: mobile devices are not very powerful, so minimizing JS, or at least avoiding low-quality JS, improves the user experience. If you want more interaction you have to rely on the features provided by AMP HTML Extensions, which means only limited interactive functionality is available, or implement more complex behaviour in pure CSS.
Validation of AMP pages is available online at this site
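Besides the online validator, there is also a command-line validator published on npm; a minimal sketch, assuming Node.js is installed and the amphtml-validator package name is still current (the file name is a placeholder):

$ npm install -g amphtml-validator
$ amphtml-validator article.amp.html

You can also append #development=1 to an AMP page's URL and watch the validation messages in the browser console.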
In fact the only thing on the page that could block loading is one external JS file (the AMP Runtime), and even that is marked for asynchronous loading. All CSS is inline.
To support AMP, each article needs two pages: the normal version and an AMP-compliant version. You could also make the page itself conform to the AMP specification directly, but that requires far more changes. The easiest way to adopt AMP is through a plugin, especially if you're using WordPress.
Automattic (effectively WordPress official) publishes an AMP plugin; after installing and activating it, no configuration is needed to turn AMP on, which is one of the reasons I like it. The plugin automatically generates an AMP page for each article and adds the AMP metadata to the original page. On the surface nothing changes after installation, but if you append /amp/ or ?amp=1 to an article's URL you get the AMP version of that article. The reason the plugin needs no configuration is that it really just provides a framework: users (developers, in practice) can write their own plugin to customize it. The official [introduction](https://github.com/Automattic/amp-wp/blob/master/readme.md) explains how; here are some of the things I customized:
AMP already supports Google Analytics, but if you need your own analytics you can't use JS; the easiest approach is a tracking pixel (an empty GIF) via amp-pixel:
add_action( 'amp_post_template_footer', 'tlo_amp_add_pixel' );

function tlo_amp_add_pixel( $amp_template ) {
    //$post_id = $amp_template->get( 'post_id' );
    $piwik_id = $GLOBALS['wp-piwik']->getOption( 'site_id' );
    ?>
    <amp-pixel src="https://piwik.example.com/piwik.php?idsite=<?php echo $piwik_id; ?>&rec=1&action_name=TITLE&urlref=DOCUMENT_REFERRER&url=CANONICAL_URL&rand=RANDOM"></amp-pixel>
    <?php
}
This is an example of using the Piwik tool for statistics, and with the WP-Piwik plugin, it can automatically obtain the site ID (requires manual replacement of the domain name).
After AMP has been active for a while, searching for the site's content on Google from a mobile device benefits noticeably. The recording below was made in the stock Safari of iOS 10 (not Blink!); the right-hand side shows the resource list:
You can see that while still on the search results page, AMP content such as the AMP Runtime has already started downloading, along with my site's logo and the first image of the article. When the first result is tapped, the page shows no sign of reloading; it appears instantly, much like an ajax transition. By then the whole page has been preloaded, so the load time is essentially zero (the iOS simulator used here is slower than real hardware; on a real device the white-screen time is even shorter, practically imperceptible). The page's URL and domain are shown at the top. Scrolling down, you can see the next few images being lazy-loaded, with timing so good it's hard to notice. Notably, almost all of the content on this page is served by Google's servers, including the HTML and the images (videos, from my testing, still come from the origin); only my analytics pixel is not proxied by Google. If you share the URL at this point, the shared domain is still Google's, and when that URL is opened (or the page is refreshed) the close button in the upper-left corner disappears while everything else stays the same; opened on a non-mobile device, it redirects to the original (non-AMP) article. Tapping the close button in the upper-left corner returns you to the search results, and the whole flow is very smooth; if the results contain several AMP pages, those open instantly as well. As you can imagine, a large part of why AMP imposes so many restrictions is to keep site owners from "doing evil" when their AMP pages are rendered on google.com, even in browsers Google does not control (such as Safari).
Google's AMP is very similar to Facebook's Instant Articles, except that the latter doesn't even require you to host the pages yourself. With AMP you still control most of the CSS, which is far better than the [transcoded pages](https://www.guozeyu.com/2015/08/block-haosou/) produced by most domestic search engines, and pages can still serve their own ads, which is a big part of why AMP was adopted so quickly; despite all its non-open restrictions it keeps a lot of things open. Enabling AMP isn't only about faster loading from Google's result pages: an AMP page can also serve as the mobile-optimized version of a site. If a site has no mobile version at all, the mobile pages can simply be built with AMP and visitors redirected to them automatically. And, as mentioned above, you can make the main page itself conform to the AMP specification, or build it that way from the start; the AMP official website is the classic example. That approach is rather radical, though, and its compatibility is not great (for instance the heavy dependence on JS), so think it through before committing to it.
Baidu borrowed heavily from AMP to build MIP, which is very similar. After a quick experiment I found that an AMP page can be converted to MIP easily; you only need to replace or delete the following pieces of the AMP page (a scripted sketch of these substitutions follows the list):
- Replace <html amp with <html mip
- Replace <script src="https://cdn.ampproject.org/v0.js" async></script> with <link rel="stylesheet" type="text/css" href="https://mipcache.bdstatic.com/static/v1/mip.css">
- Replace amp-pixel with mip-pix
- Replace amp- with mip-
- Replace </body> with <script src="https://mipcache.bdstatic.com/static/v1/mip.js"></script></body>
- Delete the AMP boilerplate:

<style amp-boilerplate>body{-webkit-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-moz-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-ms-animation:-amp-start 8s steps(1,end) 0s 1 normal both;animation:-amp-start 8s steps(1,end) 0s 1 normal both}@-webkit-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-moz-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-ms-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-o-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}</style><noscript><style amp-boilerplate>body{-webkit-animation:none;-moz-animation:none;-ms-animation:none;animation:none}</style></noscript>
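Here is a rough, untested sketch of those substitutions as a script. GNU sed and a placeholder file name amp.html are assumed; the exact <script ...> pattern must match how it appears in your page, and the multi-line boilerplate block still has to be removed by hand or with a smarter tool:

# order matters: amp-pixel must be renamed before the generic amp- rule
sed -i \
  -e 's/<html amp/<html mip/' \
  -e 's|<script src="https://cdn.ampproject.org/v0.js" async></script>|<link rel="stylesheet" type="text/css" href="https://mipcache.bdstatic.com/static/v1/mip.css">|' \
  -e 's/amp-pixel/mip-pix/g' \
  -e 's/amp-/mip-/g' \
  -e 's|</body>|<script src="https://mipcache.bdstatic.com/static/v1/mip.js"></script></body>|' \
  amp.html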
The modified AMP page can directly pass the MIP test, which is equivalent to being compatible with both AMP and MIP without secondary development. How convenient! MIP-enabled websites are also preloaded on Baidu search results, so they can be opened in seconds.
The CSS that ships with MIP actually declares this rule:
* { margin: 0; padding: 0}
This rule causes typography problems in many places, because it overrides the browser's default styles (for h1, h2 and so on), so you may need to add some extra styles just for the Baidu/MIP version.
In my testing, the <mip-video> tag does not support nested <source> tags; videos declared that way fail to load.
Load Balancing brings more advanced load-balancing features, finally including the cross-region load balancing everyone has been asking for; Rate Limiting adds advanced request-limiting features. More and more things that used to require configuration on your own server can now be configured on Cloudflare as well. (Both features are still in beta at the moment, and you have to be approved to use them.)
This feature lets Cloudflare act as the load balancer, so nothing needs to be configured at the hosting provider (see the official introduction). It is currently available only to Enterprise accounts and some eligible users. There are two ways to use it, as follows:
In this mode load balancing runs on Cloudflare's edge servers as a Layer 7 reverse proxy. It is very similar to the ordinary CDN function, except that the back-to-origin behaviour is highly customizable. You can configure origin servers in multiple regions (the server locations are set manually), with several servers per region, and group them into a pool. Point the domain name at that group, and Cloudflare's edge servers automatically fetch from the nearest origin based on its region. This effectively cuts time-to-first-byte, which helps a lot with the speed of dynamic content.
Cloudflare also includes a health-check function that automatically changes the back-to-origin target when a server goes down. DNS can be used for failover too, but DNS is always limited by cache TTLs, whereas switching at the CDN layer happens within seconds. One of my WordPress sites, tlo.xyz, uses this feature: by default it load-balances across US East and East Asia, fails over automatically if either goes down, and falls back to static pages on Google Cloud Storage if both do. You can check the TLO-Hostname response header on https://tlo.xyz to see which server answered.
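A quick way to look at that header from the command line, assuming curl is available (the header name is the one mentioned above):

$ curl -sI https://tlo.xyz | grep -i tlo-hostname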
The second mode does not require the CDN to be enabled, so visitors connect directly to the origin. You configure a group the same way as above, but with the CDN turned off Cloudflare acts purely as a DNS server: it performs GeoDNS, returning the IP of the server nearest to the visitor, and it still supports health checks, switching the answer automatically when a server goes down. Unlike most other DNS providers this is genuinely automatic; you only configure the group and Cloudflare does the rest, instead of you manually mapping regions to servers. In my testing, the GeoDNS decision is currently based on which Cloudflare server the query finally reaches, i.e. on the Anycast routing. Since the vast majority of Chinese users are routed by their carriers to Cloudflare's US-West nodes, they get the answer intended for US West, so for now this feature is of limited use for visitors in China.
Cloudflare can finally rate-limit requests per IP. This filters CC attacks quite effectively while barely affecting ordinary visitors (previously the only option was the "I'm Under Attack" mode, which makes every user wait about 5 seconds before the page loads). Different paths can have different rate limits, which makes it possible to prevent brute-force password attempts, API abuse, and the like.
The notification I built works like this: when someone's comment receives a reply, the email subject is "Re: [article title]", so if a comment gets replies from several people the local mail client automatically groups them into one thread. The recipient can also reply to the email directly, and that reply goes straight to the person who wrote the reply on the site (it is not shown on the website and I never see it; it becomes a private conversation between the two of them).
The emails are concise, contain nothing superfluous, and do not get flagged as spam.
All code has been put on GitHub Gist.
<?phpfunction tlo_comment_mail_notify($comment_id) { global $comment_author; $comment = get_comment($comment_id); $parent_id = $comment->comment_parent ? $comment->comment_parent : ''; $spam_confirmed = $comment->comment_approved; $from = $comment->comment_author_email; $to = get_comment($parent_id)->comment_author_email; if (($parent_id != '') && ($spam_confirmed != 'spam') && $from != $to && $to != get_bloginfo('admin_email') ) { $blog_name = get_option('blogname'); $blog_url = site_url(); $post_url = get_permalink( $comment->comment_post_ID ); $comment_author = $comment->comment_author; $subject = 'Re: '.html_entity_decode(get_the_title($comment->comment_post_ID)); $headers[] = 'Reply-To: '.$comment_author.' <'.$comment->comment_author_email.'>'; $comment_parent = get_comment($parent_id); $comment_parent_date = tlo_get_comment_date( $comment_parent ); $comment_parent_time = tlo_get_comment_time( $comment_parent ); $message = <<<HTML<!DOCTYPE html><html lang="zh"> <head> <meta content="text/html; charset=utf-8" http-equiv="Content-Type"> <title>$blog_name</title> </head> <body> <style type="text/css"> img { max-width: 100%; height: auto; } </style> <div class="content"> <div> <p>$comment->comment_content</p> </div> </div> <div class="footer" style="margin-top: 10px"> <p style="color: #777; font-size: small"> — <br> Reply to this email to communicate with replier directly, or <a href="$post_url#comment-$comment_id">view it on $blog_name</a>. <br> You're receiving this email because of your comment got replied. </p> </div> <blockquote type="cite"> <div>On {$comment_parent_date}, {$comment_parent_time},$comment_parent->comment_author <<a href="mailto: $comment_parent->comment_author_email">$comment_parent->comment_author_email</a>> wrote:</div> <br> <div class="content"> <div> <p>$comment_parent->comment_content</p> </div> </div> </blockquote> </body></html>HTML; add_filter( 'wp_mail_content_type', 'tlo_mail_content_type' ); add_filter( 'wp_mail_from_name', 'tlo_mail_from_name' ); wp_mail( $to, $subject, $message, $headers ); }}add_action('tlo_comment_post_async', 'tlo_comment_mail_notify');function tlo_comment_mail_notify_async($comment_id) { wp_schedule_single_event( time(), 'tlo_comment_post_async', [$comment_id] );}add_action('comment_post', 'tlo_comment_mail_notify_async');// add_action('comment_post', 'tlo_comment_mail_notify');function tlo_mail_content_type() { return 'text/html';}function tlo_mail_from_name() { global $comment_author; return $comment_author;}function tlo_get_comment_time( $comment ) { $date = mysql2date(get_option('time_format'), $comment->comment_date, true); return apply_filters( 'tlo_get_comment_time', $date, $comment );}function tlo_get_comment_date( $comment ) { $date = mysql2date(get_option('date_format'), $comment->comment_date); return apply_filters( 'tlo_get_comment_date', $date, $comment );}
Dual certificates are really only worthwhile if you have a dedicated IP; without one you have to rely on SNI anyway, and almost every browser that supports SNI also supports ECC certificates, so in that case you can skip ahead to the steps after the upgrade and simply deploy Let's Encrypt's ECC certificate alone, without the RSA one. I run more than one server and compiling Nginx myself everywhere would be too much trouble, so I chose to add a package repository and upgrade through apt. The method is as follows. First, add the Nginx package source:
$ sudo add-apt-repository ppa:nginx/stable    # sudo apt install software-properties-common
$ sudo apt update
Then remove the existing Nginx and install the new version:
$ sudo apt remove nginx nginx-common nginx-core
$ sudo apt install nginx
During installation, you may be asked whether to replace the original default configuration file, just select N. The Nginx installed at this time already contains almost all necessary and common modules, such as but not limited to GeoIP Module, HTTP Substitutions Filter Module, and HTTP Echo Module. The OpenSSL version of Nginx I installed is 1.0.2g-fips, so it does not support CHACHA20. If you want to support CHACHA20, you can only use CloudFlare’s Patch and compile it yourself. After the installation is complete, you can verify the Nginx version:
$ nginx -Vnginx version: nginx/1.12.0built with OpenSSL 1.0.2g 1 Mar 2016TLS SNI support enabledconfigure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt ='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx /nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock /nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib /nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module - -with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzi p_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic - -with-mail_ssl_module --add-dynamic-module=/build/nginx-DYnRGx/nginx-1.12.0/debian/modules/nginx-auth-pam --add-dynamic-module=/build/nginx-DYnRGx/nginx -1.12.0/debian/modules/nginx-dav-ext-module --add-dynamic-module=/build/nginx-DYnRGx/nginx-1.12.0/debian/modules/nginx-echo --add-dynamic- module=/build/nginx-DYnRGx/nginx-1.12.0/debian/modules/nginx-upstream-fair --add-dynamic-module=/build/nginx-DYnRGx/nginx-1.12.0/debian/modules/ngx_http_substitutions_filter_module
At this point, your server has no Nginx HTTP/2 bugs. Since you use the latest version of Nginx, you can configure the ECDSA/RSA dual certificate.
When I upgraded, the GeoIP module stopped working. It turned out the new packages build GeoIP as a dynamically loaded module; adding the following line to the Nginx configuration (at the top level, outside the http {} block) fixed it:
load_module "modules/ngx_http_geoip_module.so";
Let's Encrypt issues certificates completely free of charge and automatically. A single certificate can cover up to 100 domain names; wildcards are not supported (at the time of writing). To configure dual certificates you first need to issue two certificates. The examples below use acme.sh; start by creating the directories (example.com is used throughout and should be replaced with your own domain):
$ mkdir -p /etc/letsencrypt
$ mkdir -p /etc/letsencrypt/rsa
$ mkdir -p /etc/letsencrypt/ecdsa
Then adjust the Nginx configuration so that every server block listening on port 80 contains a location ^~ /.well-known/acme-challenge/ block. The example below also forces a redirect to HTTPS; it is the configuration used on the origin:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    location / {
        # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
        return 301 https://$host$request_uri;
    }
}
Before issuing, make sure every domain name you want on the certificate already points to your own server! Then issue the RSA certificate (for a multi-domain certificate just pass multiple -d options, same below; the storage directory and the certificate's display name follow the first domain):
$ acme.sh --issue --reloadcmd "nginx -s reload" -w /var/www/html -d example.com --certhome /etc/letsencrypt/rsa
Then issue the ECDSA certificate:
$ acme.sh --issue --reloadcmd "nginx -s reload" -w /var/www/html -d example.com -k ec-256 --certhome /etc/letsencrypt/ecdsa
Uninstall the cron job that acme.sh installs for itself and configure your own instead:
$ acme.sh --uninstall-cronjob
$ vim /etc/cron.d/renew-letsencrypt
Enter the following, taking care to replace /path/to/acme.sh with the absolute path of your installation:
15 02 * * * root /path/to/acme.sh --cron --certhome /etc/letsencrypt/rsa
20 02 * * * root /path/to/acme.sh --cron --ecc --certhome /etc/letsencrypt/ecdsa
Then you’re done, and the certificate automatically renews.
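To confirm that a renewal really happened, you can look at the validity dates of the deployed certificates; a small sketch using the paths from this article (openssl only):

$ openssl x509 -in /etc/letsencrypt/rsa/example.com/fullchain.cer -noout -dates
$ openssl x509 -in /etc/letsencrypt/ecdsa/example.com_ecc/fullchain.cer -noout -dates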
Because Let's Encrypt has no wildcard certificates, you will regularly run into new subdomains that need to be added to the certificate (up to 100 names per certificate). The easiest way is as follows. First edit the certificate's configuration file; both certificates' files must be changed:
$ vim /etc/letsencrypt/rsa/example.com/example.com.conf
$ vim /etc/letsencrypt/ecdsa/example.com_ecc/example.com.conf
Find the Le_Alt line and append the new domain names (comma-separated, no more than 100 in total). Then force the certificates to be re-issued; this requires the -f flag:
$ acme.sh --renew -d example.com --certhome /etc/letsencrypt/rsa -f
$ acme.sh --renew -d example.com --ecc --certhome /etc/letsencrypt/ecdsa -f
It should be noted that the intermediate and root certificates for Let's Encrypt's ECC certificates are currently not themselves ECC certificates; that is planned for the future, see [Upcoming Features](https://letsencrypt.org/upcoming-features/#ecdsa-intermediates).
First you need to generate a few keys:
$ openssl rand 48 > /etc/nginx/ticket.key
$ openssl dhparam -out /etc/nginx/dhparam.pem 2048
Then add the following to your Nginx configuration, inside the http or server block. The CHACHA20 suites are not supported by this OpenSSL build, but leaving them in the list does no harm.
##
# SSL Settings
##
ssl_certificate /etc/letsencrypt/rsa/example.com/fullchain.cer;
ssl_certificate_key /etc/letsencrypt/rsa/example.com/example.com.key;
ssl_certificate /etc/letsencrypt/ecdsa/example.com_ecc/fullchain.cer;
ssl_certificate_key /etc/letsencrypt/ecdsa/example.com_ecc/example.com.key;

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

ssl_dhparam /etc/nginx/dhparam.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';

ssl_stapling on;
ssl_stapling_verify on;
Don't forget to run nginx -s reload at the end, then check the configuration on SSL Labs; you can see that older browsers are served the RSA certificate (my server has a dedicated IP, so it can even be reached by clients without SNI support):
At this point, the ECDSA/RSA dual certificate configuration is complete, and you can view the certificate type in the browser:
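You can also check which certificate is served from the command line by forcing an RSA-only or ECDSA-only cipher suite; a rough sketch for OpenSSL 1.0.2-era clients (replace example.com with your host):

# should report rsaEncryption (the RSA certificate)
$ openssl s_client -connect example.com:443 -servername example.com -cipher ECDHE-RSA-AES128-GCM-SHA256 </dev/null 2>/dev/null | openssl x509 -noout -text | grep "Public Key Algorithm"

# should report id-ecPublicKey (the ECDSA certificate)
$ openssl s_client -connect example.com:443 -servername example.com -cipher ECDHE-ECDSA-AES128-GCM-SHA256 </dev/null 2>/dev/null | openssl x509 -noout -text | grep "Public Key Algorithm"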
What is an IPv6-only network? Strictly speaking, an IPv6-only network can reach IPv6 addresses and nothing else, which means the caching DNS server must itself have an IPv6 address and only servers that support IPv6 can be reached. For a domain name to be resolvable, the domain's own DNS servers and those of the top-level domain it belongs to must support IPv6 as well; in short, IPv4 must not appear anywhere in the process. So supporting an IPv6-only network means putting in work at almost every link in the chain.
To support IPv6, the top-level domain you use must support it first. Fortunately most of them do; from here you can see that about 98% of top-level domains already support IPv6. Most people prefer a third-party DNS provider over running their own, so you also need a DNS provider whose name servers support IPv6. Fortunately there are plenty: CloudFlare, OVH, Vultr, DNSimple, Rage4 and so on; most foreign providers support it (Route 53 being a notable exception). In China, Baidu Cloud Acceleration and sDNS have been seen to support it, while other commonly used providers such as CloudXNS, DNSPod and Alibaba Cloud do not. To check whether a top-level domain, or the DNS servers used by a domain, support IPv6, run the command below with <domain> replaced by the top-level domain (such as com) or any domain name (such as example.com):
$ dig -t AAAA `dig <domain> ns +short` +short
Then check whether the output IP is all IPv6 addresses. If nothing is output, the DNS server does not support IPv6. Example of proper IPv6 configuration:
If you want to build your own DNS server, you can refer to How to Build Your Own PowerDNS. If your root domain name does not support IPv6, then you can contact the root domain name to have them support it, or change the root domain name. If your first-level domain name does not support IPv6, contact the DNS resolver to ask them to support it, or simply change it.
Next, the server itself has to support IPv6: ask your hosting provider to assign you an IPv6 address. If they can't, you can use an IPv6 tunnel broker such as Hurricane Electric's free Tunnel Broker. A tunnel is usually not as good as native IPv6 from your provider, but it tides you over until native IPv6 is available. A tunnel broker is effectively a proxy built at the network layer (layer 3); it needs support from the server's operating system and the server must have a fixed IPv4 address. To use it you reconfigure the network interface in the OS (so shared hosting is out of the question), after which IPv6 works the normal way, at almost zero cost. Note that traffic between your host and the tunnel broker is in clear text, but as long as applications use secure protocols (HTTPS, SMB, SFTP, SSH, etc.) this is not a problem.
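For reference, bringing up a 6in4 tunnel on a Linux server usually boils down to a few ip commands like the following sketch; all the addresses here are placeholders that you would copy from your broker's tunnel details page:

# 6in4 tunnel sketch (run as root); replace the placeholder addresses
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 198.51.100.2 ttl 255   # broker endpoint / your server's IPv4
ip link set he-ipv6 up
ip addr add 2001:db8:1f0a:1234::2/64 dev he-ipv6                               # the client IPv6 assigned by the broker
ip route add ::/0 dev he-ipv6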
Unfortunately neither of my current servers has native IPv6 yet, so a tunnel broker is my only option. Having used it, I can say that despite being free it works well: downloads don't seem to be throttled; on my 100 Mbps shared bandwidth I get about 12 MB/s over native IPv4 and almost the same over the tunnel. Ping latency does increase somewhat, mainly because of the extra hop between the tunnel broker and your server, since it effectively acts as a proxy. So when creating the tunnel, be sure to pick the broker endpoint closest to your server (the one with the lowest latency), not the one closest to your users.
Note that hosts inside mainland China are strongly discouraged from using HE's Tunnel Broker: connections from China to HE detour through the United States, which ends up adding about 500 ms of ping latency to IPv6, over 1000 ms of TCP delay, and several seconds of HTTPS delay. For servers in China, the second option below is recommended instead.
After the server supports IPv6, make sure that the newly set AAAA record on the domain name resolves to the IPv6 address.
The approaches above are either native support or a network-layer proxy via a tunnel broker. There is another kind of proxy, at the application layer (layer 7): what you build there is essentially an HTTP proxy, and since most HTTP proxies can also cache static content, that amounts to a CDN. The simplest option is the free CloudFlare plan with the CDN enabled, which requires switching your DNS to them (these days CNAME/IP access is also possible, and you can even keep IPv4 back-to-origin while exposing an IPv6-capable CDN), after which everything, DNS included, supports IPv6. Akamai and similar providers can likewise add IPv6 in front of your origin. The downside is that anything relying on the visitor's real IP, such as firewalls and web applications, stops working as-is; with a few configuration changes, though, the real client IP can be recovered.
With these configurations in place, the website itself can be reached from an IPv6-only network. But that is only part of the job: don't forget that the CDN used by the site and the mail service on the domain may still lack IPv6 support. To finish, here is the list:
Note: The services listed in CDN and VPS all provide DNS that supports IPv6 and will not be listed here (the DNS services provided by Linode and DigitalOcean are actually also provided by CloudFlare)
* Rage4
* Gmail / G Suite (Gmail for Work)
When you have it configured, you can test your website on IPv6 Test.
For now it still isn't strictly necessary to support IPv6-only access in production: genuinely IPv6-only networks are rare, most are dual-stack with IPv4, and many large websites don't support IPv6 at all. Apple's "IPv6-only" requirement only means that an app must be able to communicate over IPv6 and must not hard-code IPv4 addresses, so that it keeps working on carrier networks that hand out only IPv6 addresses (those networks can still reach IPv4 services).
The self-built DNS in this article means an authoritative DNS server, i.e. the DNS configured for your own domain name, not a caching resolver configured on a client.
First of all, let me talk about the fatal disadvantages of using a self-built DNS server:
Advantages of using a self-built DNS server:
In the end, I chose to use PowerDNS software (which is actually used by many service providers that provide DNS services). I installed its recently released version 4.0. Some features supported by this version:
And that is just what comes to mind off-hand. PowerDNS also supports a great many record types (the most I have seen anywhere so far): A, AAAA, AFSDB, ALIAS (also called ANAME), CAA, CERT, CDNSKEY, CDS, CNAME, DNSKEY, DNAME, DS, HINFO, KEY, LOC, MX, NAPTR, NS, NSEC, NSEC3, NSEC3PARAM, OPENPGPKEY, PTR, RP, RRSIG, SOA, SPF, SSHFP, SRV, TKEY, TSIG, TLSA, TXT, URI and more; the rarely used ones aren't even all listed here, see the full list of supported records. Frankly, some of the unpopular types that many providers don't support are exactly the ones I need, such as LOC, SSHFP and TLSA. Not sure what this pile of records is for? See Wikipedia.
See the official documentation for detailed installation instructions: you install pdns-server first, then pdns-backend-$backend. The backend is up to you; the common choices are BIND and Generic MySQL, and if you need GeoDNS there is the GEOIP backend (the full list is here). MySQL is more convenient if you want a web control panel; BIND and GEOIP are both fine if everything is managed through files. I use the GEOIP backend: it scales well and is configured with YAML files, which is flexible and elegant. The rest of this article covers the GEOIP backend. Install it on Ubuntu (it's in the system package repositories):
$ sudo apt install pdns-server
$ sudo apt install pdns-backend-geoip
Then modify the configuration file:
$ rm /etc/powerdns/pdns.d/* # delete Example
Many features, such as CAA records, require the new version of PowerDNS. Please go to the official website to configure the software source.
Note that you should already have the MaxMind GeoIP Lite database, if not, install it as follows:
Important update ⚠️: Since April 1, 2018, the GeoIP database in DAT format can no longer be downloaded automatically by the software. Please [go to the official website to manually download the corresponding database](https://dev.maxmind.com/geoip/legacy/geolite/). It needs to be in Binary format.
Create a file /etc/GeoIP.conf with:
# The following UserId and LicenseKey are required placeholders:
UserId 999999
LicenseKey 000000000000
# Include one or more of the following ProductIds:
# * GeoLite2-City - GeoLite 2 City
# * GeoLite2-Country - GeoLite2 Country
# * GeoLite-Legacy-IPv6-City - GeoLite Legacy IPv6 City
# * GeoLite-Legacy-IPv6-Country - GeoLite Legacy IPv6 Country
# * 506 - GeoLite Legacy Country
# * 517 - GeoLite Legacy ASN
# * 533 - GeoLite Legacy City
ProductIds 506 GeoLite-Legacy-IPv6-Country
DatabaseDirectory /usr/share/GeoIP
Then install geoipupdate and run the following; once it finishes, your database has been downloaded:
$ sudo apt install geoipupdate && mkdir -p /usr/share/GeoIP && geoipupdate -v
Create a file /etc/powerdns/pdns.d/geoip.conf with:
launch=geoip
geoip-database-files=/usr/share/GeoIP/GeoLiteCountry.dat /usr/share/GeoIP/GeoIPv6.dat # the IPv4 and IPv6 country databases
geoip-database-cache=memory
geoip-zones-file=/share/zone.yaml # the location of your YAML configuration file, anywhere you like
geoip-dnssec-keydir=/etc/powerdns/key
Create that YAML file and start writing the zone. Here is an example (IPv6 is not required; all IPs should be your public IPs; this article uses country-level resolution as the example; the order of sibling entries does not matter):
# @see: https://doc.powerdns.com/md/authoritative/backend-geoip/
domains:
- domain: example.com
  ttl: 300 # Default TTL duration
  records:
    ##### Default NS
    ns1.example.com:
      - a: # Your server's first IPv4 address
          content: 10.0.0.1
          ttl: 86400
      - aaaa: # Your server's first IPv6 address
          content: ::1
          ttl: 86400
    ns2.example.com:
      - a: # Second IPv4 address of your server (if you only have one, same as above)
          content: 10.0.0.2
          ttl: 86400
      - aaaa: # Second IPv6 address of your server (if you only have one, same as above)
          content: ::2
          ttl: 86400
    ##### Root domain
    example.com: # records under the root domain name
      - soa:
          content: ns1.example.com. admin.example.com. 1 86400 3600 604800 10800
          ttl: 7200
      - ns:
          content: ns1.example.com.
          ttl: 86400
      - ns:
          content: ns2.example.com.
          ttl: 86400
      - mx:
          content: 100 mx1.example.com. # weight [space] hostname
          ttl: 7200
      - mx:
          content: 100 mx2.example.com.
          ttl: 7200
      - mx:
          content: 100 mx3.example.com.
          ttl: 7200
      - a: 103.41.133.70 # To use the default TTL, you don't need separate content and ttl fields
      - aaaa: 2001:470:fa6b::1
    ##### Servers list (your server list)
    beijing-server.example.com: &beijing
      - a: 10.0.1.1
      - aaaa: ::1:1
    newyork-server.example.com: &newyork
      - a: 10.0.2.1
      - aaaa: ::2:1
    japan-server.example.com: &japan
      - a: 10.0.3.1
      - aaaa: ::3:1
    london-server.example.com: &uk
      - a: 10.0.4.1
      - aaaa: ::4:1
    france-server.example.com: &france
      - a: 10.0.5.1
      - aaaa: ::5:1
    ##### GEODNS partition resolution
    # @see: https://php.net/manual/en/function.geoip-continent-code-by-name.php
    # @see: https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3
    # "unknown" is also the default
    # pattern: %co.%cn.geo.example.com
    # default
    unknown.unknown.geo.example.com: *newyork # resolves to the US by default
    # continent level
    unknown.as.geo.example.com: *japan # Asia resolves to Japan
    unknown.oc.geo.example.com: *japan # Oceania resolves to Japan
    unknown.eu.geo.example.com: *france # Europe resolves to France
    unknown.af.geo.example.com: *france # Africa resolves to France
    # country level
    chn.as.geo.example.com: *beijing # China resolves to Beijing
    gbr.eu.geo.example.com: *uk # the UK resolves to the UK server
  services: # GEODNS
    www.example.com: [ '%co.%cn.geo.example.com', 'unknown.%cn.geo.example.com', 'unknown.unknown.geo.example.com' ]
This configuration resolves www.example.com according to the visitor's region. Due to a current bug, GeoDNS cannot be set on the root domain and a subdomain at the same time; I have reported it (https://github.com/PowerDNS/pdns/issues/4276). If you only want continent-level precision, drop one level and write %cn.geo.example.com. If you need city-level precision, add one more level, but you must also add the GeoIP city database to the configuration file; the free city database is not very accurate, and a commercial database is an extra cost.
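To preview what different regions will receive without actually traveling, you can query the authoritative server directly and simulate a client location with an EDNS Client Subnet hint (needs dig 9.10 or newer, and only works if PowerDNS is configured to honor ECS via the edns-subnet-processing setting; the prefixes below are merely illustrative):
$ dig @ns1.example.com www.example.com A +short +subnet=101.6.6.0/24   # a Chinese prefix, should return the Beijing server
$ dig @ns1.example.com www.example.com A +short +subnet=8.8.8.0/24     # a US prefix, should return the New York default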
Go to your domain registrar's management panel and add name server records for a subdomain of your domain, as shown in the figure:
Since the NS you are setting lives under your own domain, you must register the NS server's IP address (a glue record) with the parent zone (such as .com) through your registrar, so that the parent zone can hand out the IP of your NS and your self-built DNS can be reached. For example, icann.org has an NS under itself:
$ dig icann.org ns +short
a.iana-servers.net.
b.iana-servers.net.
c.iana-servers.net.
ns.icann.org.
Then look at its upper-level domain name org:
$ dig org ns +short
a2.org.afilias-nst.info.
b0.org.afilias-nst.org.
d0.org.afilias-nst.org.
c0.org.afilias-nst.info.
a0.org.afilias-nst.info.
b2.org.afilias-nst.org.
Pick any of these servers and query it for the authoritative records (this time without +short):
$ dig @a0.org.afilias-nst.info icann.org ns

;; QUESTION SECTION:
;icann.org.                     IN      NS

;; AUTHORITY SECTION:
icann.org.              86400   IN      NS      c.iana-servers.net.
icann.org.              86400   IN      NS      a.iana-servers.net.
icann.org.              86400   IN      NS      ns.icann.org.
icann.org.              86400   IN      NS      b.iana-servers.net.

;; ADDITIONAL SECTION:
ns.icann.org.           86400   IN      A       199.4.138.53
ns.icann.org.           86400   IN      AAAA    2001:500:89::53
As you can see, the org name servers already return the address records of ns.icann.org in the additional section; this is exactly why you need to fill in the IP address at your registrar. You should also make your own DNS server return the same NS names and the same IPs for your domain. Finally, don't forget to change the NS records of the domain itself.
In the YAML above I actually used an advanced YAML feature: &variable defines an anchor and *variable reuses it, which is very similar to the LINK record on CloudXNS. For example, on CloudXNS you could write:
www.example.com 600 IN A 10.0.0.1
www.example.com 600 IN A 10.0.0.2
www.example.com 600 IN AAAA ::1
www.example.com 600 IN AAAA ::2
sub.example.com 600 IN LINK www.example.com
Then in your YAML record you can write:
www.example.com: &www
  - a: 10.0.0.1
  - a: 10.0.0.2
  - aaaa: ::1
  - aaaa: ::2
sub.example.com: *www
This is plain advanced YAML (anchors and aliases), so it works without any extra support from PowerDNS.
For details see the reference documentation; to enable DNSSEC, run the following commands:
$ mkdir /etc/powerdns/key
$ pdnsutil secure-zone example.com
$ pdnsutil show-zone example.com
The last command prints the records you need to set at your domain registrar. It is not necessary to set all of them; the ECDSAP256SHA256 key with the SHA256 digest is enough. Finally, check the settings online (Test address 1, Test address 2); caches may take a few days to catch up. My inspection results
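You can also verify the DNSSEC chain yourself with dig (example.com standing in for your zone):
$ dig example.com DNSKEY +dnssec @ns1.example.com   # the zone should return DNSKEY records together with RRSIG signatures
$ dig example.com DS +short                         # the digest you set at the registrar, as served by the parent zone
$ dig example.com SOA +dnssec @8.8.8.8              # a validating resolver sets the "ad" flag once the chain checks out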
You can write this in YAML, to make it easier for you to debug:
"*.ip.example.com": - TXT: content: "IP%af: %ip, Continent: %cn, Country: %co, ASn: %as, Region: %re, Organisation: %na, City: %ci" ttl: 0
These variables can be used as your GeoDNS criteria and also to check your GeoIP database. Then query it like this:
$ random=`head -200 /dev/urandom | md5`; dig ${random}.ip.example.com txt +short
"IPv4: 42.83.200.23, Continent: as, Country: chn, ASn: unknown, Region: unknown, Organisation: unknown, City: unknown"
The IP shown is the address of the recursive (cache) DNS server. If EDNS Client Subnet is enabled and the cache server supports it, it is your own IP instead, although with 8.8.8.8 you will see the last octet of your IP replaced with 0. If you query your authoritative server directly, your own IP address is returned. Since I only installed the country database, everything except continent and country shows up as unknown.
The usual setup is one master and one slave DNS server, but that can cause problems with DNSSEC, so I run two masters instead, synchronize the records automatically, and use the same DNSSEC private key on both. Nothing seems to go wrong (after all, every record, including the SOA, is exactly the same). My servers' current configuration:
$ dig @a.gtld-servers.net guozeyu.com

;; QUESTION SECTION:
;guozeyu.com.                   IN      A

;; AUTHORITY SECTION:
guozeyu.com.            172800  IN      NS      a.geo.ns.tloxygen.net.
guozeyu.com.            172800  IN      NS      c.geo.ns.tloxygen.net.

;; ADDITIONAL SECTION:
a.geo.ns.tloxygen.net.  172800  IN      A       198.251.90.65
a.geo.ns.tloxygen.net.  172800  IN      AAAA    2605:6400:10:6a9::2
c.geo.ns.tloxygen.net.  172800  IN      A       104.196.241.116
c.geo.ns.tloxygen.net.  172800  IN      AAAA    2605:6400:20:b5e::2
That gives two IPv4 and two IPv6 addresses. a.geo.ns.tloxygen.net. is an Anycast IP served by three servers behind it, while c.geo.ns.tloxygen.net. is hosted with a different provider, so there is still a backup if one of them goes down, which makes the setup more stable.
A distributed DNS like mine is actually a mix of Unicast and Anycast. One problem is that from a given location one of the servers will be fast and the other slow; only a recursive resolver that queries asynchronously or uses GeoIP will reliably reach the fastest authoritative server, otherwise it picks one at random, and if a server goes down its IP simply stops answering. Anycast means one IP is served by multiple hosts, but I am not in a position to use it: for an individual it costs more, since you either need your own AS number and a host willing to announce it, or a provider that offers a cross-region load-balanced IP. My VPSes are with two different hosting companies and I have no AS, so Anycast is out. I do think DNS services should use Anycast, because the NS IPs themselves cannot be served via GeoDNS (they are what the parent zone hands out for your domain); with Anycast the fastest route is basically guaranteed, and the IP keeps working even if one server dies. Additionally, a DNS server must accept both TCP and UDP on port 53.
How do you fail over automatically when a server goes down? The flow is: the monitoring service detects downtime -> it sends a request to the DNS server -> the DNS server switches the record to a backup IP or pauses it; later the monitoring service sees the service recover -> it sends another request -> the DNS server restores the normal records. In practice you can create two YAML files, one used by default and one used during downtime; when monitoring detects an outage, reload the other YAML file and you are in downtime mode.
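A minimal sketch of that monitoring loop as a cron-able shell script (the paths, the probed URL, and the zone-up.yaml/zone-down.yaml names are all hypothetical; PowerDNS is simply restarted here to pick up the new zone file):
#!/bin/bash
# check-failover.sh - swap the PowerDNS GeoIP zone file depending on whether the main server answers
ZONE=/share/zone.yaml
UP=/share/zone-up.yaml      # normal records
DOWN=/share/zone-down.yaml  # records pointing at the backup IP

if curl -fs --max-time 5 http://10.0.1.1/ > /dev/null; then
    WANT=$UP
else
    WANT=$DOWN
fi

# Only touch PowerDNS when the active zone actually changes
if ! cmp -s "$WANT" "$ZONE"; then
    cp "$WANT" "$ZONE"
    systemctl restart pdns
fi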
]]>This post covers self-hosting GitLab and using a .gitlab-ci.yml file to achieve automatic builds and real-time mirroring to GitHub. There are actually many pieces of software/services you can host on your own server, such as GitHub Enterprise and Bitbucket Server, but I still recommend GitLab Community Edition: it is completely open source, free, and maintained by the community. It has no restrictions, just fewer features than the Enterprise Edition, features you probably wouldn't use in the first place.
For the specific installation method see the documentation; the officially recommended environment is currently Ubuntu 16.04 LTS, installation is very easy, and the whole web stack is configured for you. For further configuration after installation, see the documentation as well. If more than one web application runs on your host, you need to adjust the existing web server and refer to the official Nginx configuration documentation. I use sub_filter in my Nginx config to replace the default page title, for better SEO and branding. For a better experience you should also configure an SMTP outgoing server (I use AWS SES) and an IMAP-capable receiving server to enable reply-by-email (I use Gmail; receiving mail has fewer limits than sending). The specific setup steps are all in the official documentation. After installation, registration is open by default; if you don't want outsiders to register, disable it in the web admin area. If you do keep registration open, think about what new users should be allowed to do. For example, I only let new users create Issues and Snippets: set the Default projects limit to 0 in the web admin area, and edit the configuration file to stop new users from creating groups. It is also recommended to enable reCAPTCHA and Akismet in the admin area to block malicious registrations and spam Issues, and, since registration is open, to use OmniAuth for third-party OAuth login.
GitLab Runner is very powerful but not built in, and it makes useful features such as automatic deployment extremely convenient. After installing and configuring the Runner, add a file named .gitlab-ci.yml to the project root. Taking the master branch as an example, to deploy the files to /var/gitlab/myapp on every commit to master, the file should look like this:
pages:
  stage: deploy
  script:
    - mkdir -p /var/gitlab/myapp
    - git --work-tree=/var/gitlab/myapp checkout -f
  only:
    - master
Note that you need to create the /var/gitlab folder first and set its owner and group to gitlab-runner:gitlab-runner:
$ sudo chown -R gitlab-runner:gitlab-runner /var/gitlab
The core part of .gitlab-ci.yml is script:. The commands there are executed as the gitlab-runner user; modify them to suit your needs, and several examples follow later. Then commit, and go to the project's settings page to activate the Runner for this project. In the settings it is recommended to set Builds to git clone instead of git fetch, because the latter often has strange problems, while the speed bottleneck of the former is mostly network transfer.
The official documentation strongly discourages deploying Runners on the same host as GitLab, but that advice is not absolute: it exists because some builds run for a long time and eat a lot of CPU and memory. If your build scripts are not like that, installing the Runner on the same host is fine.
The following deployments are the ones I use most often; treat them as examples and adapt them to your own needs. They do not consume many system resources, and because they run under nice they will not block other tasks, so they can live on the same host.
Modify the git checkout line of the previous .gitlab-ci.yml file and replace it with:
jekyll build --incremental -d /var/gitlab/myapp
You can also add the following line to .gitlab-ci.yml to automatically check all PHP files for syntax errors; files that pass are not shown, only the errors are:
if find . -type f -name "*.php" -exec nice php -l {} \; | grep -v "No syntax errors"; then false; else echo "No syntax errors"; fi
The following procedure requires root access on the host (or prefix each command with sudo). First, give the gitlab-runner user its own SSH key:
$ ssh-keygen -f /home/gitlab-runner/.ssh/id_rsa
Then create /home/gitlab-runner/.ssh/known_hosts with:
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
After that, take the contents of the /home/gitlab-runner/.ssh/id_rsa.pub file and add this SSH key on GitHub. Since you did all this as root, don't forget to fix the ownership when you are done:
$ sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh
Then, again through .gitlab-ci.yml, set up the automatic synchronization:
git push --force --mirror git@github.com:[Organization]/[Project].git
Just change [Organization] and [Project] to your own names.
All files are stored on your own server, which is more secure: you hold the highest authority and will never find a project deleted out from under you. Deployments have extremely low latency and high reliability; you will never be in the situation where your own server is fine but a third-party service is down so you cannot deploy. You can also deploy to the nearest server or to an internal one as needed. GitHub's servers sit on the US east coast, so connections from Asia are not fast and from China not stable. Best of all, if you already have a VPS and plenty of spare time, you effectively get private repositories for free, though [pay attention to the hardware requirements](http://docs.gitlab.com/ce/install/requirements.html#hardware-requirements) and don't enable it if you cannot spare the time to maintain it. Since real-time mirroring to GitHub is easy to configure, and GitLab has many features GitHub lacks, GitLab can serve as the primary version-control tool while GitHub just keeps a mirror as a backup.
]]>WordPress is a dynamic system. Without caching, the server has to read the database and generate the page content for every request, which takes anywhere from 20ms to 1000ms or worse depending on the host. Configured correctly, caching speeds things up significantly and reduces the load on the host.
Using a plugin is the easiest way to configure caching. I recommend WP Super Cache, a caching plugin made by the company behind WordPress.com. Its page-caching features are very complete and it supports several caching modes, including mod_rewrite. If you use Nginx you can use my configuration file to skip PHP entirely and serve the static cache files directly, which noticeably improves page response time. It is also necessary to return correct Cache-Control headers to browsers, especially for CSS and JS files.
Object caching is more flexible and more broadly applicable than page caching, though certainly not as fast. I recommend APCu here; on Ubuntu/Debian it is installed as follows:
$ apt install php-apcu
Then restart the web server and install the APCu Object Cache Backend plugin. Redis, Memcached and the like are caches of the same kind, but APCu is the easiest to configure and fast enough.
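Before blaming the plugin if it misbehaves, it may be worth confirming the extension is actually loaded (assuming the CLI and the web server use the same PHP build):
$ php -m | grep -i apcu                                   # the module should be listed
$ php -r 'var_dump(function_exists("apcu_store"));'       # prints bool(true) once APCu is available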
For example, my website uses a VPS on the US east coast (the main one) plus one in Asia. The main server runs Nginx, PHP and MySQL; the Asian server runs only Nginx. Caching is configured on both servers and lsyncd keeps the cached content in sync. On every visit Nginx checks the cache first and only proxies to the main server on a miss, which greatly reduces first-page latency.
With a site-wide CDN you can avoid configuring caching on your own server, add HTTPS, HTTP/2 and other features in front of it, and filter bad traffic and defend against DDoS at the same time (provided your origin IP is not exposed, or you set up a whitelist). In addition, a CDN can significantly improve the loading speed of the website, which benefits visitors.
CloudFlare is free to use; you need to switch your domain's DNS to CloudFlare first, and after that no additional configuration is required. But it only caches CSS, JS and media files, not HTML pages, which means every visit still goes back to the origin server; the speed of the pages themselves will not improve much, and configuring caching on the origin server is still necessary.
KeyCDN is a pay-as-you-go CDN provider; with the plugin [Full Site Cache for KeyCDN](https://wordpress.org/plugins/full-site-cache-kc/) it is easy to configure, and the plugin refreshes the cache automatically. Unlike CloudFlare, KeyCDN can cache HTML pages, which greatly reduces the load on the origin server, and cache purges can be done by tag to avoid unnecessary refreshes. When a site gets a lot of traffic, KeyCDN noticeably improves speed and the cache hit rate climbs as well.
You can configure another domain name, use a CDN on that domain name, and then implement a partial CDN by rewriting the page address through a plugin. The WP Super Cache mentioned above can configure CDN, or use CDN Enabler to implement some CDN functions. As for the choice of CDN, it doesn’t matter, as long as it supports Origin Pull (that is, requesting back-to-origin).
The easiest way to improve the server itself is to throw more memory and CPU cores at it. Beyond that, it is also important to improve the performance of the applications running on it:
The processing speed of PHP scripts is a major bottleneck for WordPress. Running the latest PHP version buys real performance, for example [7.0 is about 3 times faster than 5.6](https://www.zend.com/en/resources/php7_infographic).
Second, using fewer plugins reduces the amount of scripts PHP needs to execute, because WordPress loads all plugins on every page load. A small number of plugins (under 10) have little impact on WordPress speed, and of course it depends on the plugin itself.
The database is one of the bottlenecks of WordPress performance, and optimizing it can improve speed to a certain extent. In general, if you use WordPress normally you don't need to clean up the database, but some plugins create far too many useless tables and rows, and the server's response time can degrade noticeably (roughly one to three times slower). I recommend [WP-Optimize](https://wordpress.org/plugins/wp-optimize/) for cleanup. A large number of posts, on the other hand, does not really hurt loading speed (performance stays acceptable below roughly 10,000 posts; and why would you write that many anyway? Quality matters more than quantity).
Images occupy a large part of the size of the web page and are also related to the user experience.
Compressing images not only makes them load faster for visitors but also saves server bandwidth. I recommend the free [EWWW Image Optimizer](https://wordpress.org/plugins/ewww-image-optimizer/), which processes images directly on your server. If your server's performance is limited, you can use Optimus to do it in the cloud, though the free version is very limited.
Loading different images for different devices helps too: the image served to a phone can be much smaller than the one served to a computer with a Retina screen. The [srcset technique mentioned on this site](https://www.guozeyu.com/2015/08/using-srcset/) is the easiest way to implement this, as long as your theme supports it (the latest official default themes already do); if the theme itself does not, it can also be added through plugins.
WordPress has some built-in services that you probably don’t need, and disabling them can reduce the amount of resources a page needs to load and what the server needs to do.
WordPress supports Emoji but injects extra script and styles into every page because of it; use this script to disable it (if you don't need it).
Google Fonts loads very slowly in China, and the page stays blank until it finishes. You can install the dedicated Disable Google Fonts plugin, or disable it in the settings of the Autoptimize plugin mentioned below.
Go to Settings => Discussions to disable the “Try to notify blogs linked in articles” and “Allow other blogs to send link notifications (pingbacks and trackbacks) to new articles” features (if you don’t need them). This feature does not affect page load time.
Reducing the number of requests can also improve loading speed in some cases, and reducing page size can shorten the time it takes to download a page. Autoptimize is recommended, it can minify CSS, JS and HTML, and combine CSS and JS to reduce the number of requests.
However, if your blog has HTTP/2 enabled, reducing the number of requests matters much less; but HTTP/2 requires HTTPS, so it is hard to say whether the end result is faster or slower. Using HTTPS is still strongly recommended anyway: for security, a little speed is worth sacrificing.
Doing all of the above can speed things up noticeably. My website implements every one of these points; its load timeline over Wi-Fi in China, with caching disabled in the browser, looks like this:
]]>Note: CloudXNS does not support TCP.
The best free DNS in China, and feature-complete as a DNS service. CloudXNS has quite a few servers in China, but they do not use Anycast, so its foreign servers are of little practical use.
CloudXNS uses GeoDNS on the domain names of its own NS servers: domestic queries resolve to domestic servers and foreign queries to foreign ones. However, the glue records for its NS names are still four domestic servers, and in practice the glue records are used first, so the foreign servers never come into play.
Why not add the foreign servers to the glue records as well? Because glue records cannot do GeoDNS, resolvers would pick a server at random, and then some domestic requests would end up at a US server and slow down.
The DNS services that are quite popular in foreign countries have all the necessary functions. Servers are located all over the world and use Anycast to guarantee the lowest latency.
There is a considerable amount of free DNS abroad, with servers all over the world, using Anycast to ensure the lowest latency.
Inexpensive, offers a 100% SLA, and uses Anycast to guarantee the lowest latency.
Both DNSSEC and partition resolution are supported, and Anycast is used to ensure the lowest latency.
An Alibaba Cloud product: all resolution servers run in Alibaba Cloud's own data centers, which guarantees speed. Pricing is based on the features you use, not on query volume. There are many servers in China, but they do not use Anycast, so performance abroad is very poor. Speed benchmark:
ns1.alidns.com. 106.11.141.111
The paid version of Aliyun DNS differs from the free one: it adds overseas lines, but without Anycast the line is chosen at random, so compared with the free version it responds slightly faster abroad and slightly slower domestically. Note: Alibaba Cloud DNS does not support TCP.
The H.264 + AAC combination in an MP4 container is currently the most compatible format and plays on most devices, but it is not an open format, so clients or servers that only ship open-source software cannot use it. A typical example is Wikimedia Commons, the media site behind Wikipedia, which refuses to use MP4, so videos on Wikipedia simply cannot play in some browsers. For most cases, though, MP4 with H.264 + AAC is sufficient. Next, the video needs to be displayed, which is not hard in the 21st century. On the web, the good news is that all major browsers follow a single specification (HTML5), so as long as you follow it too, you can embed video in any mainstream browser (although they follow the same specification, the codecs they support differ; it is as if everyone's mailbox now follows a unified standard and you can drop any book into it, but your book is written in Chinese, so people who cannot read Chinese still won't understand it).
The version of a video streamed over the Internet differs in quality from the one played locally: to make playback smooth, quality is reduced. Many video sites also prepare several versions of the same video at different quality levels, so that the best version a given network can handle is the one that gets played.
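As a rough illustration (the file names and quality numbers are only examples), a web-friendly H.264 + AAC MP4 can be produced with ffmpeg like this; -movflags +faststart moves the index to the front of the file so playback can begin before the download finishes:
$ ffmpeg -i master.mov -c:v libx264 -crf 23 -preset medium -vf scale=-2:1080 \
         -c:a aac -b:a 128k -movflags +faststart web-1080p.mp4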
The real purpose of this article is to talk about the gap between Chinese video sites and foreign ones. Some factors are unavoidable, but others are things these sites should have done and simply didn't. On YouTube a video can reach 7680×4320@60FPS and still isn't billed as "ultra HD", whereas in China 1280×720@30FPS or even lower is routinely called "ultra HD"; measured by pixel information, that is nearly 100 times less. On top of that, picture quality at the same resolution differs a lot, and lower quality means more money saved (mainland video sites are hardly short of money, so why pinch pennies? Who knows).
Higher video resolution means better results on higher-resolution displays. There are already plenty of 4K and 8K TVs and monitors, and more and more cameras and phones can record 4K and 8K video. This high-quality content is moving toward the ordinary consumer market, so video sites still need to support it.
As I said, I don't expect domestic video sites to do any of the above, since they want to make more money. But why keep using something that is full of bugs, no longer maintained by its vendor, and requires an extra plug-in just to play a video? I'm talking about Flash (a few sites have indeed stopped using it). For simply playing a video, HTML5 is enough; nothing so complicated is needed. To get them to drop Flash, Flash first has to disappear from the client side (it would help if browsers actively blocked it, as Safari does); gradually they will notice large numbers of users churning (this is already happening), and then one day they will finally switch to the new technology. Abroad this is far ahead of China: mainstream video sites stopped defaulting to Flash years ago.
There is actually a lot more to be said about the streaming of video on the Internet, and this article is just a rough talk.
]]>There are no brand-new advanced features; everything here existed before or amounts to a minor software upgrade.
Of course, a computer can also do this afterwards.
The video creative filter function lets you shoot short videos with special filter effects applied, thanks to the continued evolution of the image processor. Counting the existing miniature-effect movie, there are 5 filters to choose from: Miniature, Memories, Dreams, Old Movies, and Black and White.
It is to take multiple photos and automatically compose a video. As far as I understand, it should be electronic shutter every time, which can greatly save shutter life and battery life. And because the automatic synthesis is 1080p video, it is also very space-saving, this function should support manual mode.
A time-lapse movie is a form of movie in which still images shot at regular intervals are stitched together and fast-forwarded. You can give full play to the image quality advantages of EOS 80D still images, and get high-quality MOV full HD movies (about 30p, 25p). It is suitable for shooting the flow of clouds or the movement of stars in the sky, etc. Compared with ordinary short films, the changes of the subject are concentrated in a short time, and the effect is more impressive. The shooting interval can be set from 1 second to 99 hours, 59 minutes and 59 seconds, and the number of shots can be set from 2 to 3600 shots.
But is this a joke? Why is the composited clip only Full HD instead of 4K? It has to be said that the two functions above are merely software upgrades, and other EOS cameras may well gain them later through firmware updates. The functions listed below are hardware upgrades.
These features are already available in other EOS cameras, but they were once features that the 70D didn’t have.
The new 45-point all-cross-type AF is a big improvement over the 70D's 19 points, but on paper it does not look as good as the 65-point all-cross-type AF of the earlier 7D Mark II. Is that actually the case?
The full 45-point cross-type AF sensor is an AF system that can correspond to F2.8, F4, and F5.6 beams, and some AF points are compatible with F8 beams. When using a lens with a maximum aperture of F5.6 and a brighter large aperture, focusing can be performed using up to 45 cross-type AF points. Depending on the maximum aperture of the lens used, the nature of the AF point will also change. When using different lenses or setting different aspect ratios, the number of available focusing points and focusing methods will be different.
After some research, I found an amazing new feature: even at F8 there are 27 usable AF points (9 of them cross-type), the second most in the EOS line at F8 (second only to the 1D X Mark II). So even when a telephoto lens is paired with an extender and the effective aperture becomes very small, focusing remains convenient.
Install the extender EF 1.4X III on the EF 100-400mm f/4.5-5.6L IS II USM, and there are 27 focus points that can be used when the maximum aperture at the telephoto end is reduced to F8.
Compare the focusing capabilities of different focusing systems under F8:
wow! The 80D is amazing, but which lenses support its Group G autofocus?
Um, I feel cheated. How many focusing points do the unsupported lenses have under F8?
One, only one, not even a secondary focus point, but at least it’s better than the 70D. In addition, the new 45-point full-cross focus should be inferior to the 65-point full-cross focus in all aspects.
The EOS 80D is equipped with a new CMOS image sensor independently developed and manufactured by Canon, with an effective pixel of about 24.2 million. The CMOS semiconductor manufacturing project introduces a new refinement process, and uses a gapless microlens, which shortens the distance between the microlens and the photodiode and improves the light gathering rate. And the pixel structure is optimized to improve the signal-to-noise ratio. Linked with the DIGIC 6 digital image processor, high-resolution can be obtained even in low-light scenes, and fine performance can be achieved.
This pixel count is already considered high for an APS-C camera. However, the ISO ceiling was not raised this time; had the pixel count stayed put, the maximum ISO could probably have been doubled to 51200. This sensor also looks exactly the same as the 760D's. Why do I say they are the same? Because from here they appear identical:
However, there are two differences:
The focusing system is worth noting: Hybrid CMOS AF III has only appeared on the 750D/760D, and its performance is excellent; perhaps it is a cut-down Dual Pixel CMOS AF. See the test on YouTube for details.
For the first time on a two-digit EOS body, Canon fits an optical viewfinder with roughly 100% coverage, which really does matter. The 80D viewfinder also shows some new elements, including what I would call a "(pseudo) electronic level".
Why is it fake? Because of its limited capabilities:
The real thing looks like this; see the level above it? It is two-axis.
When I saw this mode, I laughed, once again confirming that the 80D will not be a professional or quasi-professional camera.
Some representative shooting scenes are collected in the special scene mode, and the camera can make corresponding shooting settings according to the specific scenes. There are 10 optional scenes, including handheld night scene, candlelight, and children. The operation during shooting is simple. Just like using a smartphone, slide your finger on the LCD monitor and touch the selection icon, and the camera can find the appropriate settings according to the characteristics of the subject and shooting environment to take good photos.
If you are a non-professional and choosing between the 7D Mark II and the 80D, I recommend the 80D, which has a touchscreen and Wi-Fi, which is really convenient. The 7D Mark II is better than the 80D in focus, continuous shooting, and durability, but it is not necessarily much stronger than it in other aspects, and the pixels are lower. If you are choosing between the 5D Mark III, 6D and 80D, and you have a low budget and need to record video, I would recommend the 80D, which has better focus and shutter speed than the first two. As for full-frame or APS-C, that’s another question worth discussing. For more information on the 80D, please refer to Canon’s official press release
]]>Lossless and lossy compression of JPEG and PNG images, with support for compressing already-uploaded images, which speeds up image loading for visitors. It also supports progressive JPEGs: normally, on a slow connection an image loads bit by bit from top to bottom, but with progressive loading a low-resolution version appears first and then gradually sharpens. I became a paying user of this plugin, which allows further lossy compression of PNG and JPEG images and reduces my server's CPU usage (otherwise every uploaded image costs a lot of CPU). However, I have recently switched to Cloudflare's Polish feature and no longer install this kind of software on the server.
This plugin automatically merges and minifies CSS and JS, which is very convenient. Some themes also carry a lot of inline CSS; with CSS merging turned on, that inline CSS is folded into the merged file as well. Here is some of my configuration for this plugin, for reference:
functions.js,player,mediaelement,sparkline,toolbar,admin,akismet,themes
admin,mediaelement,wordfence,piwik,toolbar,dashicons,crayon-syntax-highlighter/themes,crayon-syntax-highlighter/fonts
<head>: can load JS ahead of time
## AMP
Plugin author: Automattic, optimized for Google, showing super fast AMP pages in search results. Details: Building Ultra-Fast Mobile Pages with AMP
Two-factor authentication (requires the mobile client of the same name, or 1Password Pro). Remember the Heartbleed exploit? With two-factor authentication there is no need to panic even if another Heartbleed appears. The principle: scan a QR code with the phone's generator to store the key, then the generator produces a 6-digit code based on the current time; each code expires quickly and can only be used once. It is even more secure than an SMS verification code.
After changing the IP, you need to log in again. Even using HTTP can avoid the risk of cookie leakage to a certain extent.
WordPress #1 Security Defense plugin, which can limit the number of password attempts to prevent brute force cracking, add WAF function to your WordPress, and view real-time access. By looking at its log, I found out that my WordPress has been being maliciously cracked by brute force, sometimes thousands of times a day, almost every day. Fortunately, as long as you try 3 wrong passwords, the IP will be blocked directly, which is why there are not many Failed Logins. This kind of defense is based on the application layer, and it is not something that cloud-accelerated and anti-DDOS services such as Cloudflare can defend against.
This plugin enables WordPress to display auto-highlighted source code, with a variety of themes to choose from and support for multiple programming languages.
This is a simplified version of Jetpack, without the bunch of useless features of Jetpack, no need to log in to wordpress.com, full-featured, and very easy to use.
This plugin automatically generates the site's sitemap.xml and robots.txt; you can submit sitemap.xml directly to Baidu and Google so that search engines don't miss a single article on your site. It supports WordPress multisite without any configuration and is recommended to be enabled network-wide.
## WP-Piwik
This plugin enables your entire website to have statistical functions, supports WordPress multi-site, and is recommended to be enabled on the entire network. About Piwik with WordPress, please refer to this article: Use Piwik with WordPress to build a powerful statistical system.
This plugin enables your WordPress to generate a Podcast Feed, allowing you to have a podcast platform, and you can also submit this feed directly to places like iTunes.
This plugin allows you to insert the Exif information of the picture into the description of the picture. The disadvantage is that it needs to be added manually after uploading the picture, but it can be added in batches, which is quite convenient. After inserting the Exif information into the description of the picture, and then inserting the picture into the article, the Exif information can be displayed in the article.
]]>Go to the Geolocation page in Settings and click Download a GeoIP database in the lower-left corner. Once downloaded you can use GeoIP (PHP), but it is slow; I recommend GeoIP (PECL) instead. If you use cPanel, you can enable the GeoIP module directly on the Select PHP Version page, then add or edit this line in php.ini:
geoip.custom_directory = /Matomo/misc
Then OK, select GeoIP (PECL) and that’s it!
First, you need to install Matomo Analytics under WordPress (perfectly supports multi-site), and then go to settings. If you have both WordPress and Matomo installed on one site, choose the PHP API, otherwise choose the HTTP API. Fill in the Matomo path and Auth Token (which can be found in Matomo’s backend), then open Enable Tracking and select the default tracking. In Show Statistics, you can choose which statistics are displayed in which places, which is very convenient.
When you choose Show per post stats, you can see the visitor information of this article on the edit page of each article, which is very nice.
First, in order to identify visitors' real IPs correctly, you need to add the following line to config/config.ini.php:
proxy_client_headers[] = "HTTP_CF_CONNECTING_IP"
If IP Geolocation is enabled in your CloudFlare account, you don't actually need GeoIP on the host at all. Just create a .htaccess file in the root directory with the following lines:
RewriteEngine On
RewriteBase /
RewriteRule ^ - [E=GEOIP_COUNTRY_CODE:%{HTTP:CF-IPCountry}]
Then go back to the geolocation settings in Matomo; you can now choose the third option, GeoIP (Apache), which is the fastest.
]]>In fact, the TL-PA500 manual states that its Ethernet port is only 10/100Mbps, which means it can never exceed 100M in the first place... I didn't quite believe the powerline adapters could really reach 500M, so I tested them: I copied a 3 GB file to an AirPort Time Capsule from another AirPort base station (wireless) connected through the powerline link, and then copied the same file while connected directly to the AirPort Time Capsule (wireless), comparing the times; the two base stations are about 15m apart. Note that the other base station (an AirPort Extreme) does not support 802.11ac while the AirPort Time Capsule does; both support the 5GHz band and used it during the copies. Through the powerline adapters the 3GB file took 8 minutes, roughly 50M, well below the 802.11n limit at 5GHz. Directly connected it took 2 minutes 20 seconds, roughly 170M. So the powerline link caps throughput at about 50M (it would probably be the same over its Ethernet port). That speed is still acceptable, since typical broadband in China does not reach it, so using powerline for Internet access is entirely feasible. In the end, however, I ran proper in-wall cabling anyway, and the home LAN finally reached 1000M, which is plenty.
]]>Usually you don't need to upload files to the CDN; you just point the domain name at the CDN's servers. When a user visits your website they are really visiting one of those servers, which checks whether it has a cached copy: if it does, it returns it directly from the cache; if not, it fetches the file from the origin server and passes it on to the user. From then on, users in that location get the file straight from the CDN cache and the speed improves considerably. The more visitors your site has, the more noticeable the speedup; if most requests are cache misses, the gain is small. Beyond that, it's best to put the CDN's hostname under a cookie-free root domain. My CDN recommendations have since been summarized in a newer article.
KeyCDN is also very easy to use: its nodes cover the world, the price is extremely low, and you pay as you go. It has real-time logs, supports custom domains and free custom SSL, and provides free SSL certificates (Let's Encrypt). KeyCDN only accelerates static resources.
In order to use CloudFlare, you need to change your domain’s NS resolution provider, that’s all, no other settings are required. CloudFlare defaults to dynamic resource acceleration. At the same time, a free SSL certificate (Comodo Positive SSL Wildcard) is provided, and CloudFlare has a free version.
]]>A static web page is, at its core, just an .html file. First of all, there is a common misunderstanding: some people think static pages cannot be updated easily. In fact they can; with a static site generator, updating is not complicated. Updating one article means regenerating the home page and that article, which usually takes well under a minute. What if the blog used dynamic pages instead? That is certainly possible, and there is plenty of mature software, such as WordPress, which runs on PHP and needs a MySQL database. Every page view requires the server to read the database (or a cache) and render the content into styled HTML before returning it to the user. Dynamic pages can of course do everything static pages can, plus more, such as image uploads and scheduled publishing. So if dynamic pages can do everything static pages can, what are the advantages of static pages? After plenty of DDoS attack testing, I found that static pages are much harder to take down than dynamic ones, by one or even several orders of magnitude. Every dynamic page view has to parse PHP and hit the database or cache, which needs far more computation than serving a static file, so against a static site the same DDoS is basically ineffective and the site doesn't even slow down.
A static page needs no database at all; it is just a file, so there is no such thing as SQL injection, and far fewer bugs in general.
Putting a CDN in front of a static site is extremely simple: whole-site caching is enough, every page can be cached directly, and the caches are purged whenever a page is updated. With a CDN you no longer fear DDoS; even attacks of many gigabytes are not a problem, and all it costs you is a few cents in traffic fees.
A static site needs no database; any server that can host files and be reached from outside will do. You can even put it on Amazon S3. Almost any server can deploy it.
Basically, as long as you know HTML, CSS and JavaScript you can write static pages, and a static site generator can then produce all the pages for you.
If a static site generator is used there is inevitably some delay, and how much depends on how fast the pages are generated. There are two approaches to choose from: generate locally, or generate on the server.
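For the local approach, a minimal sketch (Jekyll as the generator; the destination path and host name are purely illustrative) is just generate-then-upload:
$ jekyll build -d _site                                        # regenerate all pages locally
$ rsync -az --delete _site/ user@example.com:/var/www/blog/    # upload only the files that changed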
These features can generally only be implemented by dynamic pages, because they all need a database. But a static site can use third-party services instead: Disqus for comments, Google Analytics for statistics, or you can run your own small dynamic server just for these things and install Piwik on it.
Blogging with static pages is perfectly fine, and I recommend it (though, eating my own words, I have since switched to WordPress). It is convenient and simple, and there is nothing wrong with it.
]]>When a web address begins with https://, it means this protocol is in use. Apple's latest mobile operating system, iOS 9, not only brings many new features but also improves the security of the whole system, as the [iOS developer resources](https://developer.apple.com/library/prerelease/ios/releasenotes/General/WhatsNewIniOS/Articles/iOS9.html) say: "If you're developing a new app, you should use HTTPS exclusively. If you have an existing app, you should use HTTPS as much as you can right now, and create a plan for migrating the rest of your app as soon as possible." In addition, communication through higher-level APIs needs to be encrypted using TLS version 1.2 with forward secrecy; if you try to make a connection that doesn't follow this requirement, an error is thrown.
That's right: starting with iOS 9, non-HTTPS requests will gradually be disabled (existing apps still work without HTTPS in iOS 9 for now). I believe that in the near future all apps will use HTTPS and plain HTTP will be eliminated entirely. So why use HTTPS, and in which situations should it be used?
HTTPS encrypts data in transit, preventing a man in the middle from intercepting or modifying it; it protects both user information and site content. For example, on what the public calls "insecure free Wi-Fi", if every page a user visits is HTTPS, the Wi-Fi operator can do essentially nothing to that user. In other words, the "free Wi-Fi is unsafe" line reported by the media is largely a rumor: with HTTPS and HSTS enabled, free Wi-Fi cannot intercept passwords or any other information, and users can pay and perform other sensitive operations with peace of mind. CCTV's 3.15 program telling everyone "free Wi-Fi is unsafe" without any technical explanation is simply scaring the audience. The reason all the photos in WeChat Moments could be harvested is that Moments uploads were sent in plain text, which is a problem with WeChat itself, obviously not with all software; with the release of iOS 9 and its push for HTTPS, this class of problem will disappear. Second, HTTPS is not only about preventing information theft, it also prevents content from being modified in transit. For example, China Unicom and China Mobile modify website content and inject their own ads to push their products; the site owner did not prepare those ads and does not even know about them in advance. They may have no ethical bottom line, but once we use HTTPS there is nothing these carriers can do. The "404 error page optimization" of Xiaomi routers tampers with non-HTTPS pages on the same principle and shows users its own ads for profit; it is hijacking, no exaggeration, and some users have even found ads injected by Xiaomi into perfectly normal pages. When HTTPS becomes universal, all of this will disappear. Until then, site owners who do not support HTTPS can only put up with being hijacked by carriers and routers.
In my opinion, all web pages and apps should use HTTPS fully and mandatorily to avoid the situations above. Personal websites included: HTTPS should be enabled everywhere to avoid losing readers to tampered pages and injected ads. Using HTTPS does not add much cost and can even make pages faster: the SPDY protocol minimizes network latency and improves perceived speed, but SPDY only works over HTTPS. With the current trend, more and more webmasters will adopt HTTPS, willingly or not, and HTTPS is about to become mainstream. China is the country where HTTPS is least widespread, but with Baidu moving to full-site HTTPS and UPYUN supporting HTTPS on custom domains, the whole industry will be pushed forward.
If a web page does not use HTTPS, the content on the page, the keywords you search for, and even usernames and passwords travel unencrypted, and "middlemen" can read and tamper with all of it. The free Wi-Fi you connect to, your carrier, and so on can modify page content: these middlemen insert their own ads, restyle 404 pages, and read the photos in your WeChat Moments without your permission. Once HTTPS is enabled, the page data is encrypted and under normal circumstances a man in the middle cannot obtain it; but if the middleman strips or replaces HTTPS, the attack is still possible, which is why an HTTPS site must also include a certificate for verification.
Since an HTTPS site must include a certificate, the client can verify that the server is authorized by the domain owner (preventing connections to a man-in-the-middle server). Browsers normally verify this certificate and warn the user if it is wrong. If a user's DNS is polluted, traffic can be redirected to a different host (IP address) entirely, which is even worse than a simple man in the middle and allows far more mischief; with HTTPS the browser checks the certificate and will warn or refuse to connect if it is wrong, but a plain HTTP request performs no such authentication. Full protection therefore requires the site to enforce HTTPS (that is, disable HTTP access) and to enable HSTS: once HSTS is on, every page of the domain can only be reached over HTTPS for the configured validity period.
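You can check from the command line whether a site actually enforces this (example.com standing in for the site being tested):
$ curl -sI https://example.com/ | grep -i '^strict-transport-security'   # the HSTS header should be present on every HTTPS response
$ curl -sI http://example.com/ | grep -i '^location'                     # plain HTTP should redirect to the https:// URL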
For pages where every resource is loaded over HTTPS, many browsers show an encryption indicator to tell users the site is encrypted, which lends the site extra credibility. A reminder, though: not every HTTPS page is safe, since anyone can easily obtain an SSL certificate for any website, so you still have to check the domain name itself. Sites whose certificate displays the company name directly are more trustworthy, because that kind of certificate requires documentary proof to issue. Below are the indicators Chrome for Mac shows when visiting various HTTPS sites. Chrome's indicator panel has two parts, verification first and encryption second, and the sites can usually be divided into the following cases:
The first and second cases mean a sufficiently secure encryption method is used (though the second provides no Certificate Transparency information); they differ only in the signature level of the certificate, which has nothing to do with the encryption method or the strength of the verification. Either way, the certificate is guaranteed not to be forged.
The third case means the page contains insecure resources: the appearance of the site may have been altered, but the HTML itself is reliable.
The fourth case is a certificate signed with SHA-1. Because SHA-1 is no longer strong enough, the verification is weaker: the cost of forging such a certificate keeps dropping, so it may not be secure. The encryption itself on such a site is still adequate.
The fifth case represents a possible man-in-the-middle attack (since no Certificate Transparency information is provided, and SHA-1 is used).
The sixth case means the website uses a certificate issued by an untrusted root certificate (or the certificate does not cover the current domain name). In any case, I do not recommend using SHA-1 signed certificates; it is worth noting that the latest version of Safari also no longer treats SHA-1 signed certificates as trustworthy. SHA-1 is on its way out.
Google crawls HTTPS sites perfectly well, and it has officially said that using HTTPS also improves rankings; Google supports HTTPS sites that use SNI. Baidu has recently started crawling HTTPS sites and will raise their rankings accordingly; in my testing not every certificate is supported, but SNI is.
It has to be said that HTTPS does add latency, mainly on the first page load; subsequent pages are much faster. In my tests, enabling HTTPS added roughly 2~5 times the delay on that first load, and resources on other domains incur the same extra delay.
The SNI extension is widely deployed, but some clients are still incompatible, such as IE 8 and earlier on Windows XP. As of now, about 95% of browsers in use in China support SNI.
Some search engines are fine with HTTPS, but others don’t support it. However, more and more search engines are gradually supporting HTTPS, I don’t think there is any need to worry.
A certificate is required to use HTTPS. With more and more free SSL certificates available, the overall cost of deploying HTTPS keeps falling. On the other hand, as certificates become cheaper to issue, the verification value of HTTPS shrinks somewhat, while its value for encryption remains.
HTTPS can only prevent tampering to a certain extent; it cannot resist powerful [network blocking](https://zh.wikipedia.org/wiki/%E9%98%B2%E7%81%AB%E9%95%BF%E5%9F%8E). At present some "rogue" browsers do not verify certificates at all, which greatly weakens security: encryption still happens without verification, but if DNS is polluted a man-in-the-middle attack remains feasible. If DNS is polluted and HTTPS is enforced, the site simply becomes unreachable. So DNS badly needs encryption too. If DNSSEC or "DNS over HTTPS" (resolving DNS over the HTTPS protocol) can be supported soon, HTTPS's role as verification will matter less, and the day with no man-in-the-middle attacks at all is not far off (though still impossible in the presence of network blocking). Perhaps future browsers will only access HTTPS sites, just as iOS 9 vigorously promotes HTTPS, which would greatly improve security and resist man-in-the-middle attacks.
The following is the configuration for Apache in .htaccess:
# disable the HTTP protocol (redirect everything to HTTPS)
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
HSTS (HTTP Strict Transport Security) is a way to make browsers enforce HTTPS. When a user visits an HTTPS site, the server returns a header informing the browser that HTTPS must be enforced for this domain: within the validity period, the browser will only ever use HTTPS to access it. The following is the configuration for Apache in .htaccess:
Header set Strict-Transport-Security "max-age=315360000; preload; includeSubDomains" env=HTTPS
For example, when http://tlo.xyz is visited for the first time, the browser is 301-redirected to https://tlo.xyz and then receives this header. For the next ten years (the max-age above), everything under tlo.xyz, including second-level domains such as ze3kr.tlo.xyz, will only be accessed over HTTPS. But that is not the whole story: if the very first visit is already being hijacked, the header never arrives and HSTS achieves nothing. So you should include the preload parameter after enabling HSTS and then go and submit your domain to the HSTS preload list, paying attention to its requirements. Some time after submission, your domain name will appear in the preload list built into the source code of the major browsers.
Finally, go to SSL Server Test to have your server's SSL configuration scored.
Activité Pop can automatically detect walking, running, swimming and sleeping. You don’t need to make any settings at all, it can correctly identify the current type of exercise, including sleep records. It can record exercise duration, walking/running distance, calories burned during exercise, total sleep duration, number of wake ups, deep sleep duration, and light sleep duration. It’s all automatic.
Activité Pop has a vibrating alarm, and beyond that a smart wake-up function: when the set time is approaching and it detects that you have already woken up or entered light sleep, it wakes you a little early.
Up to 8 months of battery life on standard coin cell batteries.
A 4K screen usually refers to a display that is four times the resolution of 1080p (that is, twice as wide and twice as tall). The difference between the 4K screen and 1080p resolution is equivalent to the difference between the iPhone 4 and its previous generation. Anyone who has used it knows that the difference between the iPhone 4’s Retina screen and the non-Retina screen is very obvious. Especially the displayed text becomes clearer and sharper. However, since the display is much larger than that of a mobile phone, if a display larger than 19 inches is to achieve the Retina effect, it will require a super high resolution of 4K.
After actual use, the 4K screen did not disappoint me, especially the display of text and pictures was excellent.
This is a mid-range 4K monitor, suitable for drawing, photo processing and more. The 16:9 widescreen, 60 Hz refresh rate and 6 ms response time also make it a decent gaming screen, and it has a wide range of features.
Over HDMI this monitor only goes up to 3840 x 2160 @ 30 Hz; only computers with a DisplayPort output can drive 3840 x 2160 @ 60 Hz. A Mini DisplayPort to DisplayPort cable is included. 3840 x 2160 @ 30 Hz works on the latest models across the Mac line, and 3840 x 2160 @ 60 Hz on the latest Retina MacBook Pros, Mac Pros and 5K iMacs (this requires enabling the display's MST function). To achieve the Retina effect, the display output needs to be scaled up. This is well supported on the Mac, where almost all third-party apps support Retina. Windows 8.1 and newer also support scaled display, but unfortunately not many third-party apps support it, and those apps look jagged rather than sharp. If you are using a Retina Mac, this display as an extended screen does not feel much different from the built-in one; but if you pair a Retina Mac with a 1080p screen, the text on the 1080p screen looks noticeably blurry.
This monitor can also serve as a USB hub: connect a USB port on the computer to the monitor's upstream port, and the four USB 3.0 ports on the monitor can then connect any USB devices, such as hard drives, phones, mice, keyboards or network adapters, just as if they were plugged into the computer directly, saving you a separate USB hub. A USB 3.0 cable for connecting the computer to the monitor's USB input is included. I have not run into any problems with my Mac in actual use, and no drivers were needed. The ports on the monitor can of course also charge USB devices. What is even nicer is that even when the computer is off, or the monitor is not connected to a computer at all, the four USB ports can still charge devices as long as the monitor is on or in standby. (By default no power is supplied in standby; you can enable this in the settings.)
Parameter name | Corresponding value |
---|---|
Screen Diagonal Size | 60.47 cm (23.80”) |
Aspect Ratio | Widescreen (16:9) |
Panel Type | IPS |
Best Resolution | 3840 x 2160 @ 60 Hz |
Brightness | Up to 300 cd/㎡ |
Response time | 6 ms minimum |
Colors | 1.07 billion colors |
I think the pixel density of a 4K screen around 24 inches is basically close to that of a Mac with a Retina screen, and I would prefer a 24-inch 4K monitor. A 27” would require 5K or higher. At present, there are very few interfaces capable of transmitting 5K resolution, so a 24-inch 4K is the most suitable choice.
A fixed-width photo here refers to a photo whose width style attribute is set to a fixed number of px. As more and more displays with a high device pixel ratio appear, websites need higher-resolution photos to suit them. For example, a photo displayed at 200px wide occupies 200 physical pixels (the pixels actually used, the same below) on a @1x display, 400 physical pixels on a @2x display, 600 on a @3x display, and likewise 800 on a @4x display. If the photo were only available at 200px, it would look blurry on @2x~@4x displays; if only the 800px version were available, @1x~@3x devices would load unnecessary data, which means unnecessary waiting (especially on phones). This is where responsive images come in, and the simplest, most efficient way is srcset. Say four photos are provided at widths of 200, 400, 600 and 800 pixels; through the srcset attribute these four photos cover displays with device pixel ratios from @1x to @4x, so each display gets the photo best suited to it. The file names of the four pictures are 200px.png, 400px.png, 600px.png and 800px.png. In short, for a fixed-width photo the physical pixels it occupies depend only on the device pixel ratio, so only the x descriptor of the srcset attribute is needed.
<img src="200px.png" srcset="400px.png 2x, 600px.png 3x, 800px.png 4x">
With the srcset attribute added this way, the browser loads a different photo according to its own device pixel ratio. Browsers that do not support srcset simply load 200px.png from the src attribute, so this method is backward compatible.
An indeterminate-width photo here refers to a photo whose width style attribute is set to a percentage. If the width is uncertain (or uncertain on the phone but fixed on the computer, and so on), srcset can still solve it. Since the occupied pixels cannot be known in advance, it is hard to guarantee that the photo's own pixels exactly match the physical pixels it occupies, because the user's window size is unknown. Suppose only five photos are prepared, 400~3200px wide, named 400px.png, 800px.png, 1600px.png, 2400px.png and 3200px.png. Usually, if no photo exactly matches the physical pixels the image occupies, the browser loads one slightly larger than needed; for example, if the photo occupies 500 physical pixels of the screen, 800px.png should be loaded. Loading a slightly larger photo still gives the best display while saving as much network traffic as possible. In short, the physical pixels occupied by an indeterminate-width photo depend on both the device pixel ratio and the occupied width. With the w descriptor of the srcset attribute, you only specify each file's width and the browser works out, from the device pixel ratio, which picture is best. If the image occupies the whole window (i.e. 100vw wide, the same below), setting srcset like this is already enough. Leaving the concept of device pixel ratio aside, the w descriptor of srcset lets the browser decide for itself which photo to load.
<img src="800px.png" srcset="400px.png 400w, 800px.png 800w, 1600px.png 1600w, 2400px.png 2400w, 3200px.png 3200w" alt="" />
However, there is still a small problem: the browser is fairly naive and assumes the image always spans 100% of the window width. What if the image does not fill the window and has borders on both sides? That is what the sizes attribute is for. The width of the image is specified in sizes; percentages cannot be used, but units such as vw and px can (100vw is the width of the whole window), calc() expressions work, and media queries are supported. As long as the width computed in sizes always equals the width of the screen the image actually occupies, the browser does the rest. Here are some typical cases:
If the borders on both sides of the picture (meaning the distance from the edge of the picture to the edge of the window) always take up a fixed percentage of the window, then the picture itself is always a fixed percentage of the window. For example, if each side's border is always 10% of the window, the sizes attribute can be set as follows:
sizes="80vw"
If the image sits inside an element whose width is 70% of the window, and inside that element each side of the image still has a border equal to 10% of the window width, then the sizes attribute can be set as follows:
sizes="50vw"
For example, if the border on each side of the picture is always 10px, then the sizes attribute can be set as follows:
sizes="calc( 100vw - 20px )"
If the image sits inside an element whose width is 70% of the window, and inside that element the border on each side of the image is always 10px, then the sizes attribute can be set as follows:
sizes="calc( 70vw - 20px )"
For example, if the border on each side of the picture is always 10px but the picture can be at most 1000px wide, then the sizes attribute can be set as follows:
sizes="(min-width: 1020px) 1000px, calc( 100vw - 20px )"
Here (min-width: 1020px) 1000px means that when the window is at least 1020px wide, the image width is 1000px.
Take my website as an example. The frame around a picture here is 1.875rem. The picture sits inside an element whose default width is 100% of the window; when the window is wider than 40.063rem the element takes up 50% of the window, and the element is capped at 500px. Putting the sizes attribute together with the srcset attribute and its w descriptors solves it perfectly.
<img src="800px.png" sizes="(min-width: 1000px) calc( 500px - 1.875rem ), (min-width: 40.063rem) calc( 50vw - 1.875rem ), calc( 100vw - 1.875rem ) " srcset="400px.png 400w, 800px.png 800w, 1600px.png 1600w, 2400px.png 2400w, 3200px.png 3200w" alt="" />
Then you can test it in the browser to achieve the effect of adapting to all screen sizes and pixel ratios of all devices.
In general, the x descriptor of srcset is better supported than the w descriptor, because the w descriptor is newer. The good news is that Chrome 38+, Firefox 38+, Safari 9+, Opera 30+, Android 5.0+ and iOS 9+ all support the w descriptor (IE/Edge and Windows Phone do not, as of the time of writing). So I think now is a good time to use the w descriptor of srcset for responsive images, and the method is perfectly backward compatible even where it is not supported. You can also check CanIUse.com for the complete compatibility list for the srcset attribute.
It is compatible with most cameras that take SD cards. For example, the Canon 5D Mark III, 7D Mark II and other models have no Wi-Fi, but they do have dual CF/SD card slots and can save photos to both cards at the same time. Since an Eyefi Mobi card is slower and smaller than a CF card, I set JPEGs to be stored on the Eyefi Mobi card and RAW files on the CF card: JPEG suits sharing to social networks from the phone, RAW suits post-editing, which is very convenient. Every photo I take on my Canon is stored as both JPEG and RAW to both cards (set to "simultaneous recording" in the camera), and even the 7D Mark II's 10fps burst poses no problem at all. So this card works well as a secondary card in a professional camera and lets you quickly share freshly taken photos from your phone. The Eyefi Mobi card also suits entry-level bodies with only an SD slot; these cameras do not shoot fast bursts, so the card meets daily needs and fits entry-level users who like to share photos.
Cameras that support the Eyefi linked functions can configure the Eyefi card's settings directly, keep transferring files after the camera is switched off, and show the transfer progress on the camera's screen, with the Eyefi options available right in the camera menu. On a Canon camera, after inserting the Eyefi card a new menu option appears in the settings:
When the Eyefi Mobi card is used for the first time, you just download an app on the phone and enter the card's ID into it; once configured, put the card in the camera, take a photo, and the picture appears on the phone almost immediately, which is very convenient to use. The card's transfer status and progress can be viewed on the camera. Normally the card only turns on its wireless when there are new photos and switches it off once everything has been transferred, so the camera's battery life is almost unaffected.
A domain name usually corresponds to an IP address, and the browser can only talk to the server once it knows this IP address. To resolve the domain name, the client sends a query for it to a DNS resolver, which returns the domain's IP address. DNS records have a TTL (time to live), i.e. a cache duration; results are cached on the DNS resolver, the router and the client for that long. The DNS resolver is usually provided by the ISP, and when there is a cache the lookup is quite fast. (Today's DNS resolution travels as very insecure plaintext, and any middleman can break or modify the result.)
With the IP address in hand, the browser establishes a TCP connection with the server and sends a request. The request includes the domain name of the site being visited (so that one IP address can serve different content for multiple domain names), the request method, the URL path, the acceptable data types, compression types and encodings, cookies, and browser information (which lets the site tell, for example, whether you are visiting from a phone).
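To make that concrete, here is a rough sketch of what such a request could look like (the header values are illustrative, not captured from a real session):
GET / HTTP/1.1
Host: tlo.xyz
User-Agent: Mozilla/5.0 (iPhone; ...)
Accept: text/html,application/xhtml+xml
Accept-Encoding: gzip, deflate
Cookie: sessionid=abc123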
The browser starts to download the page itself, and after downloading the page itself, it starts to download other content that the page depends on, including CSS, JavaScript, images, etc. Depending on the priority, they may be downloaded before or after rendering.
The browser waits for the most important dependencies (that is, the CSS and JavaScript in the head section, discussed further below) to load before rendering, because rendering depends on them. CSS is what we usually call style sheets, and they matter a great deal; take this Baidu page as an example:
Without CSS style sheets, the layout of the entire web page is “out of control” and the page doesn’t load in the expected way, which is a terrible experience. A web page without a style sheet and a web page with a style sheet are usually two completely different things. However, JavaScript is usually not related to layout and can be loaded later without affecting page styles.
Every piece of content in a web page has to go through this process before the client gets it, and the user sees nothing until the page renders; before that, the page is blank. Keep in mind that if a page stays blank for 3 seconds, more than half of users will leave. Mobile-first starts from the user experience on high-latency mobile networks, but the desktop enjoys the resulting speed boost as well. To increase speed, several methods are common:
If JavaScript is placed at the end, it can be loaded after the page is rendered, greatly reducing the time when the page is blank. This is the simplest and the most obvious improvement.
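As a minimal sketch (the file names here are made up), keeping the stylesheet in the head but moving scripts to just before the closing body tag looks like this:
<head>
<link rel="stylesheet" href="style.css">
</head>
<body>
<!-- ...page content... -->
<!-- scripts placed at the end so they load after the page has rendered -->
<script src="app.js"></script>
</body>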
Using CDN means putting static files on the CDN server as much as possible. A CDN contains many nodes. When users download files on CDN, they will be downloaded from the node with the nearest physical location or the same operator. If there is a cache, it will be directly passed to the user. If there is no cache, it will be downloaded from the nearest data center and cached, and then passed to the user. My approach is to put all the images, videos, CSS stylesheets and JavaScript on the site on the CDN, the site itself is not on the CDN. Doing so can greatly reduce load times at a small cost. Whether it is a personal blog or an industry website, it is necessary to use a CDN. If you don’t have the money to buy your own CDN, it’s okay, you can use the CDN of common open source CSS and JavaScript for free. The official website address is: cdnjs.com.
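For example, a common library can be referenced straight from cdnjs instead of being served from your own host; the library, version and path below are only an illustration, so check cdnjs.com for the exact current URL:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script>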
Set caching for images, CSS style sheets and JavaScript. I'm used to a fairly long period (a week, a year, and so on); some people set a short cache period so they can modify files frequently, whereas I simply change the file's path when I modify it, so the cached copy always stays current. Making good use of the cache greatly speeds up a user's second visit.
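On Apache this can again be done in .htaccess; a minimal sketch using mod_expires (the one-year period is only an example, and it assumes the module is enabled):
# long cache lifetimes for static resources
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/png "access plus 1 year"
ExpiresByType image/jpeg "access plus 1 year"
ExpiresByType text/css "access plus 1 year"
ExpiresByType application/javascript "access plus 1 year"
</IfModule>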
For website speed, we should not only focus on the first startup speed, but also use the cache to improve the next visit speed. And you should pay attention to user experience while improving speed.
When you wake your Mac's display, your phone receives a notification; swipe right, tap "Unlock" and press your fingerprint, and your computer is unlocked wirelessly. You can also lock the computer from the phone, play songs and so on, which is very convenient. The MacID client on the computer can even detect by sound whether you have stepped away and then lock automatically. If your computer has a Multi-Touch trackpad, you can also set a specific gesture to unlock the Mac; for security, the gesture can be set to work only while the phone is connected to the computer. Beyond easy unlocking, MacID also transfers clipboard contents between the phone and the computer in both directions, which is very handy for people who often use two devices at once.
MacID also supports plug-ins in the Today View, and it is extremely convenient to unlock your Mac anytime, anywhere.
One mobile phone can wirelessly control many functions such as unlocking of Macs, and the list is simple and beautiful.
You can also control your Mac wirelessly with your Apple Watch.
This software has its own built-in HDR, completely different from the system's HDR function. Its HDR is comparable to what you could composite in Photoshop and is very vivid. There are several HDR modes to choose from, and you can do manual HDR and adjust the HDR brightness yourself to meet every need. (The HDR function is purchased separately.)
It supports manual focus and manual exposure. Beyond that, it can adjust shutter speed, ISO sensitivity, white balance and more, all previewed in real time, which covers the needs of most photos. The shutter speed can be set as long as 0.5 seconds, which easily handles night scenes (the system camera exposes automatically but never uses a shutter as long as 0.5 s, so its night shots usually show a lot of noise). It can also choose the image format, saving JPEG (with adjustable quality), uncompressed TIFF or losslessly compressed TIFF for lossless photos, and it can adjust the color temperature manually in real time to get white balance exactly right.
Typically, the system’s camera never provides long exposures, which can cause blurry images due to hand shake. However, ProCamera can allow 0.5~1 second exposure time, and if you hold it correctly, the picture will not be blurred. Long exposure can make the picture brighter, or have less noise at the same brightness, which greatly improves the quality of the night scene.
This software has rich post-processing functions, and can even adjust parameters such as exposure curves, which is very easy to operate.
In addition, it has many filters, and it can even adjust the parameters of each filter, which is very powerful.
This software has a powerful timer function, which is very suitable for advanced Selfie or interval shooting. Through post-production, timelapse video of up to 8 megapixels can be captured. This function is similar to the timer on a DSLR and is very useful. You don’t even need to buy another software for the timer.
With the purchase of this software, you can remotely control the ProCamera to take pictures on your Apple Watch, and you can also browse the photos you have taken to get the most out of your Apple Watch. If you already own an Apple Watch and are looking for a third-party software that lets you take pictures remotely with your Apple Watch, this software is exactly what you’re looking for.
Although this software has many functions, it never feels "heavy" in use. Its interface is simple and friendly, suitable for professionals and non-professionals alike, and it can handle the whole process of creating a picture.
All of that changed with the release of the iPhone, because the browser on the iPhone, Safari, is a real, very modern web browser that supports the HTML5 standard. It is almost the same browser as on a computer, right down to CSS3 and JavaScript support, yet all of this is displayed on a device only 320 pixels wide. A desktop browser squeezed to 320 pixels behaves very differently: when Safari visits a page not optimized for it, it scales the page down (the width is scaled to roughly 1/3). Take Baidu News as an example: without any adaptation, the desktop version of the site is shown and the text on the page becomes tiny, though users can work around it by zooming, so it is still usable. Baidu News, however, made a version specifically for the iPhone, like this:
It adapts better to touchscreens and does not suffer from tiny fonts or scaling. But note that this is no longer the same page as before, which means Baidu had to design a separate web page for the iPhone version, and that is costly. With CSS, however, there is no need to prepare two pages to cover both computers and phones: they load the same page, only the applied style sheets differ. For example, on a Wikipedia navigation page without mobile optimization, Safari on the iPhone zooms the page out and shows the same layout as on a computer:
However, with the addition of CSS style sheets (and of course the viewport declaration in the head), the page displays perfectly on the phone, like the image below:
A page like that Wikipedia navigation page, designed around the computer layout first and then given extra CSS to make it mobile-friendly, is desktop-first rather than mobile-first. Mobile-first is the opposite: the page is designed for the phone first and then adapted to the computer, with computer and phone still loading the same page. Since mobile-first sites are designed around phones, and since the release of the iPhone many smartphones ship cutting-edge browsers, mobile-first sites naturally use many new HTML5 and CSS3 features, greatly shortening development time. Moreover, mobile-first sites are designed for the phone's high-latency network, so pages are smaller and load much faster in any network environment. The advantages of mobile-first are numerous, which is what made it popular.
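As a minimal sketch of the mobile-first idea described above (the selector and breakpoint are made up for illustration), the same page declares a viewport in the head and then widens its layout with a media query:
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
/* mobile first: the base style targets the phone */
.content { width: 100%; }
/* wider screens get the desktop layout */
@media (min-width: 768px) {
.content { width: 70%; margin: 0 auto; }
}
</style>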