A/B testing originated in advertising as a way to tell which of two versions of a communication (like two versions of an advertisement) was more effective in driving sales. Since Hopkins' seminal book "Scientific Advertising" (1923), the technique has been used in retail as well (e.g., in deciding on effective store design) and, most recently, in the design of websites. It is in the latter context that the term has rapidly gained even more attention, and many (mostly commercial) websites now regularly run A/B tests on design options (e.g., should the check-out button read "Buy Now" or "Order"?). Some call such a website design test a "split test" because the test actually splits the visitors of the website into different groups, each seeing a different version of the website, and monitors key performance indicators such as check-outs, likes, or clicks.
Why is this essentially the same as experimental research? The gist of A/B testing is that you manipulate at least one aspect (e.g., call-to-action, color, piece of text, positioning, ...) of your website. In experimental terms, that aspect is your independent variable. To manipulate it, you design at least two versions of the website, each with a different value on the design variable you want to test (e.g., a "Buy Now" vs. an "Order" call-to-action). Each version of the website can be called a condition. You then randomly assign visitors to the conditions so that you can observe the visitors' behavior in each. This observation is what one would label the dependent measure, as one expects its value to depend at least partly on the value of the manipulated design variable. In sum, this describes the most basic experiment (and the majority of A/B tests).
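The steps above can be sketched in a few lines of code. This is a minimal illustration, not a production testing tool: the condition names and visitor counts are made up, and the analysis shown (a pooled two-proportion z-test on conversion counts) is one common choice among several for comparing the two conditions.

```python
import math
import random

def assign_condition(visitor_id, conditions=("buy_now", "order")):
    # The "split": each visitor is randomly assigned to one condition.
    return random.choice(conditions)

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    # Compare the conversion rates (the dependent measure) of two
    # conditions with a pooled two-proportion z-test.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical result: 120/1000 check-outs for "Buy Now"
# vs. 90/1000 for "Order".
z, p = two_proportion_z_test(120, 1000, 90, 1000)
```

With these made-up numbers, the difference would be judged statistically significant at the conventional 5% level; with smaller samples the same rates might not be.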
You see, the basics of the two are the same. They even share a bit of history. Though experimental research is older than A/B testing, its first landmark publication is from the same era as Hopkins' book: Fisher's "The Arrangement of Field Experiments" dates back to 1926. Despite that similarity, I do believe that current A/B testing has not yet benefited from the long tradition and theoretical advances academics have made since then. Experimentation has been a core research method in agriculture, biomedicine, psychology, communication, and other fields. This has produced crucial insights into the design, execution, measurement, and analysis of experiments. If these advances are being applied in A/B testing, then they are surely not widely shared.
(I will follow up on this post with one or more posts on specific experimentation insights that could be worthwhile from an A/B testing perspective)
Why does the above actually matter?
For students, this implies that a lot of transferable skills may be hiding in your courses. Try to understand experimental research, its subtleties, and the creativity it involves. You may find it even more relevant after graduation, when confronted with your own instance of A/B testing whatever content you end up producing professionally. For testers, it implies that tests could benefit a lot from the extensive academic tradition of experimental research.
(Disclaimer: a great post by @peeplaja inspired this post. Thanks to @ElienDJ for tweeting about it.)