The book Learning HTTP/2.pdf

Free download of the book Learning HTTP/2.pdf

A PRACTICAL GUIDE FOR BEGINNERS

Stephen Ludin & Javier Garza
Copyright © 2017 Stephen Ludin

Download link for the book Learning HTTP/2.pdf

 

Contents

Preface vii
Foreword xiii

 

1. The Evolution of HTTP  1
HTTP/0.9 and 1.0 2
HTTP/1.1 3
Beyond 1.1 4
SPDY 4
HTTP/2 4

 

2. HTTP/2 Quick Start 7
Up and Running 7
Get a Certificate 8
Use an Online Generator 8
Self Signed 8
Let’s Encrypt 8
Get and Run Your First HTTP/2 Server 9
Pick a Browser 10

 

3. How and Why We Hack the Web 11
Performance Challenges Today 11
The Anatomy of a Web Page Request 11
Critical Performance 14
The Problems with HTTP/1 16
Web Performance Techniques 21
Best Practices for Web Performance 22
Anti-Patterns 30

Summary 31

 

4. Transition to HTTP/2 33
Browser Support 33
Moving to TLS 34
Undoing HTTP 1.1 “Optimizations” 36
Third Parties 38
Supporting Older Clients 38
Summary 39

 

5. The HTTP/2 Protocol 41
Layers of HTTP/2 41
The Connection 42
Frames 44
Streams 47
Messages 48
Flow Control 51
Priority 52
Server Push 53
Pushing an Object 53
Choosing What to Push 55
Header Compression (HPACK) 56
On the Wire 58
A Simple GET 58
Summary 63

 

6. HTTP/2 Performance 65
Client Implementations 65
Latency 67
Packet Loss 70
Server Push 72
Time to First Byte (TTFB) 74
Third Parties 76
HTTP/2 Anti-Patterns 81
Domain Sharding 81
Inlining 82
Concatenating 82
Cookie-less Domains 82
Spriting 82
Prefetch 83
Real-World Performance 83
Performance Measurement Methodology 84

Study 1: www.facebook.com 84
Study 2: www.yahoo.com 86
Summary 89

 

7. HTTP/2 Implementations 91
Desktop Web Browsers 91
TLS Only 91
Disabling HTTP/2 92
Support for HTTP/2 Server Push 92
Connection Coalescing 92
HTTP/2 Debugging Tools 92
Beta Channel 93
Mobile 93
Mobile App Support 93
Servers, Proxies, and Caches 94
Content Delivery Networks (CDNs) 95
Summary 95

 

8. Debugging h2 97
Web Browser Developer Tools 97
Chrome Developer Tools 97
Firefox Developer Tools 104
Debugging h2 on iOS Using Charles Proxy 106
Debugging h2 on Android 108
WebPagetest 109
OpenSSL 109
OpenSSL Commands 110
nghttp2 110
Using nghttp 110
curl 112
Using curl 112
h2i 114
Wireshark 115
Summary 116

 

9. What Is Next? 117
TCP or UDP? 117
QUIC 118
TLS 1.3 119
HTTP/3? 120
Summary 120

A. HTTP/2 Frames 121
B. Tools Reference 131
Index 133

 

Preface

HTTP/2, also called h2 for simplicity, is a major revision of the HTTP network protocol used by the World Wide Web, meant to improve the perceived performance of loading web content.
Since HTTP/1.1 (h1) was approved in 1999, the web has changed significantly, from mostly text-based web pages that weighed a few kilobytes and included fewer than 10 objects, to today’s media-rich websites that weigh on average over 2 megabytes and include an average of 140 objects. However, the HTTP protocol used to deliver the web content did not change in the intervening years, making room for a new industry of Web Performance experts who specialize in coming up with workarounds to help the aging protocol load web pages faster.
People’s expectations for performance have changed too: while in the late ’90s people were willing to wait up to seven seconds for a page to load, a 2009 study by Forrester Research found that online shoppers expected pages to load in under two seconds, with a large share of users abandoning sites where pages take over three seconds to load. A recent study by Google showed that even a delay of 400 milliseconds (the blink of an eye) will cause people to search less.
That’s why h2 was created—a protocol that can better handle today’s complex pages without sacrificing speed. HTTP/2’s adoption has been increasing as more website administrators realize they can improve the perceived performance of their websites with little effort.
We all use h2 every day—it powers some of the most popular sites like Facebook, Twitter, Google, and Wikipedia—but many people don’t know about it. Our goal is to educate you on h2 and its performance benefits, so you can get the most out of it.

 

Who Should Read This Book
Regardless of your role, if you find yourself responsible for any part of the life cycle of a website, this book will be useful for you. It is intended for people building or running websites, and in general anybody considering implementing h2, or looking to understand how it works. We expect you to be familiar with web browsers, web servers, websites, and the basics of the HTTP protocol.

 

 

What This Book Isn’t
The goal of this book is to teach you h2 and help you make the most out of the new version of the HTTP protocol. It is not a comprehensive guide for all h2 clients, servers, debug tools, performance benchmarking, etc. This book is intended for people not familiar with HTTP/2, but even experts may still find it to be a convenient resource.

 

 

Acknowledgments
We would like to thank Akamai’s h2 core team and Moritz Steiner, one of Akamai’s researchers on the Foundry team, who coauthored several h2 papers with Stephen; Pierre Lermant (for his good sense of humor, attention to detail, and for reviewing and contributing content for this book); Martin Flack (for his often illuminating Lisp implementation, and also a member of Akamai’s Foundry team); Jeff Zitomer (for his support, encouragement, and contagious smile); Mark Nottingham (for his contributions to the h2 protocol); Pat Meenan (for his countless contributions to Webpagetest.org, probably the best free tool for measuring web performance); and Andy Davies (who created the “WebPagetest Bulk Tester,” which we used extensively throughout this book).
Thanks to our editors Brian Anderson, Virginia Wilson, and Dawn Schanafelt for making everything so easy, and all the h2 experts who provided feedback and ideas for this book: Ilya Grigorik, Patrick McManus, Daniel Stenberg, Ragnar Lonn, Colin Bendell, Mark Nottingham, Hooman Beheshti, Rob Trace, Tim Kadlec, and Pat Meenan.

 

Foreword 

In 2009, HTTP/1.1 was well over a decade old, and arguably still the most popular application protocol on the internet. Not only was it used for browsing the web, it was the go-to protocol for a multitude of other things. Its ease of use, broad implementation, and widely shared understanding by developers and operation engineers gave it huge advantages, and made it hard to replace. Some people were even starting to say that it formed a “second waist” for the classic hourglass model of the internet’s architecture.
However, HTTP was showing its age. The web had changed tremendously in its lifetime, and its demands strained the venerable protocol. Now loading a single web page often involved making hundreds of requests, and their collective overhead was slowing down the web. As a result, a whole cottage industry of Web Performance Optimization started forming to create workarounds.
These problems were seen clearly in the HTTP community, but we didn’t have the mandate to fix them; previous efforts like HTTP-NG had failed, and without strong support for a proposal from both web browsers and servers, it felt foolish to start a speculative effort. This was reflected in the HTTP working group’s charter at the time, which said:
The Working Group must not introduce a new version of HTTP and should not add new functionality to HTTP.
Instead, our mission was to clarify HTTP’s specification, and (at least for me) to rebuild a strong community of HTTP implementers. 

That said, there was still interest in more efficient expressions of HTTP’s semantics, such as Roy Fielding’s WAKA proposal (which unfortunately has never been completed) and work on HTTP over SCTP (primarily at the University of Delaware).

Sometime after giving a talk at Google that touched on some of these topics, I got a note from Mike Belshe, asking if we could meet. Over dinner on Castro Street in Mountain View, he sketched out that Google was about to announce an HTTP replacement protocol called SPDY.
SPDY was different because Mike worked on the Chrome browser, and he was paired with Roberto Peon, who worked on GFE, Google’s frontend web server. Controlling both ends of the connection allowed them to iterate quickly, and testing the protocol on Google’s massive traffic allowed them to verify the design at scale.
I spent a lot of that dinner with a broad smile on my face. They were solving real problems, and they had running code and data to show for it. These are all things that the Internet Engineering Task Force (IETF) loves.
However, it wasn’t until 2012 that things really began to take off for SPDY; Firefox implemented the new protocol, followed by the Nginx server, followed by Akamai. Netcraft reported a surge in the number of sites supporting SPDY. It was becoming obvious that there was broad interest in a new version of HTTP.
In October 2012, the HTTP working group was re-chartered to publish HTTP/2, using SPDY as a starting point. Over the next two years, representatives of various companies and open source projects met all over the world to talk about this new protocol, resolve issues, and assure that their implementations interoperated.
In that process, we had several disagreements and even controversies. However, I remain impressed by the professionalism, willingness to engage, and good faith demonstrated by everyone in the process; it was a remarkable group to work with. For example, in a few cases it was agreed that moving forward was more important than one person’s argument carrying the day, so we made decisions by flipping a coin.
While this might seem like madness to some, to me it demonstrates maturity and perspective that’s rare. In December 2014, just 16 days over our chartered deadline (which is early, at least in standards work), we submitted HTTP/2 to the Internet Engineering Steering Group for approval.

The proof, as they say, is in the pudding; in the IETF’s case, “running code.” We quickly had that, with support in all of the major browsers, and multiple web servers, CDNs, and tools.
HTTP/2 is by no means perfect, but that was never our intent. While the immediate goal was to clear the cobwebs and improve web performance incrementally, the bigger goal was to “prime the pump” and assure that we could successfully introduce a new version of HTTP, so that the web doesn’t get stuck on an obsolete protocol. By that measure, it’s easy to see that we succeeded. And, of course, we’re not done yet.

 

The Evolution of HTTP

In the 1930s, Vannevar Bush, an electrical engineer from the United States then at MIT’s School of Engineering, was concerned with the volume of information people were producing relative to society’s ability to consume that information. In his essay “As We May Think,” published in the Atlantic Monthly in 1945, he said:
Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. If the aggregate time spent in writing scholarly works and in reading them could be evaluated, the ratio between these amounts of time might well be startling.
He envisioned a system where our aggregate knowledge was stored on microfilm and could be “consulted with exceeding speed and flexibility.” He further stated that this information should have contextual associations with related topics, much in the way the human mind links data together. His memex system was never built, but the ideas influenced those that followed.
The term Hypertext that we take for granted today was coined around 1963 and first published in 1965 by Ted Nelson, a software designer and visionary. He proposed the concept of hypertext to mean:
…a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper. It may contain summaries, or maps of its contents and their interrelations; it may contain annotations, additions and footnotes from scholars who have examined it.

Nelson wanted to create a “docuverse” where information was interlinked, never deleted, and easily available to all. He built on Bush’s ideas and in the 1970s created a prototype implementation of a hypertext system with his project Xanadu. It was unfortunately never completed, but provided the shoulders to stand on for those to come.

HTTP enters the picture in 1989. While at CERN, Tim Berners-Lee proposed a new system for helping keep track of the information created by “the accelerators” (referencing the yet-to-be-built Large Hadron Collider) and experiments at the institution.
He embraced two concepts from Nelson: Hypertext, or “Human-readable information linked together in an unconstrained way,” and Hypermedia, a term to “indicate that one is not bound to text.” In the proposal he discussed the creation of a server and browsers on many machines that could provide a “universal system.”

 

HTTP/0.9 and 1.0
HTTP/0.9 was a wonderfully simple, if limited, protocol. It had a single method (GET), there were no headers, and it was designed to only fetch HTML (meaning no images—just text).
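To make that simplicity concrete, the following sketch (ours, not part of the book or the original specification) uses a raw Python socket to issue an HTTP/0.9-style request: a single GET line, no headers, and a response that is nothing but the document itself. The host example.com is a placeholder, and most modern servers no longer answer bare 0.9 requests, so treat it as illustrative rather than something to rely on.

import socket

# Minimal sketch of an HTTP/0.9 exchange. The entire request is one GET line
# with no headers, and the response is just the raw HTML document, ended by
# the server closing the connection. "example.com" is only a placeholder.
def http09_get(host, path="/", port=80):
    with socket.create_connection((host, port)) as sock:
        sock.sendall(f"GET {path}\r\n".encode("ascii"))  # the entire request
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:          # end of response: the server simply hangs up
                break
            chunks.append(data)
    return b"".join(chunks)       # no status line, no headers, only the document

print(http09_get("example.com")[:200])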
Over the next few years, use of HTTP grew. By 1995 there were over 18,000 servers handling HTTP traffic on port 80 across the world. The protocol had evolved well past its 0.9 roots, and in 1996 RFC 1945 codified HTTP/1.0.
Version 1.0 brought a massive amount of change to the little protocol that started it all. Whereas the 0.9 spec was about a page long, the 1.0 RFC came in at 60 pages. You could say it had grown from a toy into a tool. It brought in ideas that are very familiar to us today:
• Headers
• Response codes
• Redirects
• Errors
• Conditional requests
• Content encoding (compression)
• More request methods
and more. HTTP/1.0, though a large leap from 0.9, still had a number of known flaws—most notably, the inability to keep a connection open between requests, the lack of a mandatory Host header, and bare-bones options for caching. These three items had consequences for how the web could scale and needed to be addressed.
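For comparison with the 0.9 sketch above, here is a rough Python sketch of what a 1.0 exchange adds: a request line with a protocol version, request headers, and a response that now begins with a status line and response code. The host and header values are placeholders; only the framing is meant to reflect HTTP/1.0.

import socket

# Sketch of an HTTP/1.0 request and response. "example.com" and the header
# values are placeholders; the structure (request line, headers, blank line,
# then a response with a status line) is standard HTTP/1.0.
def http10_get(host, path="/", port=80):
    request = (
        f"GET {path} HTTP/1.0\r\n"
        "User-Agent: toy-client/0.1\r\n"    # headers are new in 1.0
        "Accept: text/html\r\n"
        "\r\n"                              # blank line terminates the headers
    )
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while True:
            data = sock.recv(4096)
            if not data:                    # as in 0.9, the connection closes per request
                break
            response += data
    head, _, body = response.partition(b"\r\n\r\n")
    return head.decode("iso-8859-1"), body

status_and_headers, body = http10_get("example.com")
print(status_and_headers)                   # e.g. a "200 OK" status line plus response headers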

 

HTTP/1.1
Right on the heels of 1.0 came 1.1, the protocol that has lived on for over 20 years. It fixed a number of the aforementioned 1.0 problems. By making the Host header mandatory, it was now possible to perform virtual hosting, or serving multiple web properties on a single IP address. When the new connection directives are used, a web server is not required to close a connection after a response. This was a boon for performance and efficiency, since the browser no longer needed to reestablish the TCP connection on every request. Additional changes included:
• An extension of cacheability headers
• An OPTIONS method
• The Upgrade header
• Range requests
• Compression with transfer-encoding
• Pipelining

HTTP/1.1 was the result of HTTP/1.0’s success and the experience gained running the older protocol for a few years.
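The two headline fixes, the mandatory Host header (which enables virtual hosting) and connections that survive multiple requests, can be sketched as follows. This is an illustrative Python snippet, not anything from the book: it assumes the server honors keep-alive and frames each response with Content-Length, and example.com and the paths are placeholders.

import socket

# Sketch of HTTP/1.1 persistent connections: two requests reuse one TCP
# connection, and every request carries the now-mandatory Host header.
def read_response(sock):
    buf = b""
    while b"\r\n\r\n" not in buf:                  # read until the headers end
        buf += sock.recv(4096)
    head, _, rest = buf.partition(b"\r\n\r\n")
    status_line, *header_lines = head.decode("iso-8859-1").split("\r\n")
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    length = int(headers.get("content-length", "0"))
    body = rest
    while len(body) < length:                      # read exactly the advertised body
        body += sock.recv(4096)
    return status_line, body[:length]

with socket.create_connection(("example.com", 80)) as sock:
    for path in ("/", "/index.html"):              # two requests over one connection
        request = (
            f"GET {path} HTTP/1.1\r\n"
            "Host: example.com\r\n"                 # mandatory in HTTP/1.1
            "Connection: keep-alive\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))
        status_line, body = read_response(sock)
        print(path, status_line, len(body), "bytes")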

 

Download link for the book Learning HTTP/2.pdf

 

 

 
