One of the most pronounced unintended effects of monetary and fiscal policy is response lag. When changing their approach to policy, central banks usually cannot have an instant impact on the economy. Interest rate changes, for example, are subject to a fairly long lag before they take effect. Even money supply controls do not have an immediate impact on the economy because of money velocity.
An issue that arises, then, is that policy changes are enacted at one point in time and only start making waves at some later point. Any delay between implementation and effect muddies the data and makes the exact influence of monetary policy harder to estimate (although I concede that the theoretical concerns may seem more drastic than they prove in practice).
All of that is compounded by the fact that, even once the effects have appeared, the data on them needs to be collected, parsed, and analyzed, and is only then published. As such, we have two sources of delay: natural pass-through (monetary policy lag) and data collection practices.
Monetary policy lag is hard to solve
As it currently stands, monetary policy lag is nearly impossible to solve because the effect is largely a consequence of the size of economies. Even in an imaginary small economy of, say, three participants that somehow has banking, money does not move instantly, as it still needs to be traded for goods and delivered to the recipient.
When economies expand and institutions appear, delays keep increasing. Such is the case with monetary policy, especially when instruments like interest rate changes are used. These may immediately affect borrowers, but it takes significantly longer for the change to pass through to the economy at large.
If we add the various institutional regulation and communication requirements, KYC processes, and numerous other highly important safeguards, the effects compound further. As such, monetary policy lag is somewhat of a necessity born of the way we conduct the economy.
There is some speculation that a central bank digital currency (CBDC) could improve general money velocity, which would, in turn, potentially reduce monetary policy lag. Yet even the money velocity benefits are still regarded with caution, so it’s unknown whether a CBDC would affect monetary policy lag at all.
Any effect would also depend on the implementation. Most CBDC proposals still rely on institutions that would stand between the end user and the central bank, which means that the delay caused by middlemen would not be eliminated.
As such, even in the best case, wherein a CBDC is implemented as a direct central bank currency that end users can hold without middlemen, monetary policy would still have delayed effects on the economy. It’s quite a bit different, however, with data collection.
Data lag is easy to solve
Data collection is usually performed by manually gathering data from various sources and then aggregating it in statistical databases. Many of the important measures, such as inflation and employment rates, could instead be derived from alternative data rather than traditional methods.
Web scraping is the primary tool for generating alternative data on the effects of monetary policy. For example, to measure the effect of interest rate changes on inflation, a central bank could scrape ecommerce and retail data from the publicly available internet and track how much prices have changed.
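To make the idea concrete, here is a minimal sketch of such price collection in Python with requests and BeautifulSoup. The URL, page markup, and CSS selectors are hypothetical placeholders rather than a reference to any particular retailer, and a real deployment would also need to respect site terms and robots.txt.

```python
# Minimal price-scraping sketch. The target URL and selectors below are
# hypothetical; swap in the structure of the retailer actually being scraped.
import requests
from bs4 import BeautifulSoup


def scrape_prices(url: str) -> dict[str, float]:
    """Return a {product_name: price} mapping scraped from a listing page."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    prices = {}
    # Assumed markup: each product sits in <div class="product"> with
    # child elements carrying the name and the price text.
    for item in soup.select("div.product"):
        name = item.select_one(".product-name").get_text(strip=True)
        price_text = item.select_one(".price").get_text(strip=True)
        prices[name] = float(price_text.replace("$", "").replace(",", ""))
    return prices


def average_price_change(old: dict[str, float], new: dict[str, float]) -> float:
    """Average percentage price change across products present in both snapshots."""
    shared = [p for p in old.keys() & new.keys() if old[p] > 0]
    if not shared:
        return 0.0
    return 100 * sum((new[p] - old[p]) / old[p] for p in shared) / len(shared)
```

Comparing snapshots taken before and after a rate change would give a crude, near-real-time read on price movement for that product range.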
I should note, however, that this would be a somewhat imperfect metric: inflation is meant to capture a general fall in purchasing power, which ecommerce prices may not fully reflect, since the product range is limited and services are scarcely represented.
Yet such an approach has been performed rather successfully in academia. The Billion Prices Project has produced numerous research papers and alternative measures of CPI, especially for countries where access to objective information might have been restricted. All of it was done through large-scale scraping of pricing data from online retailers.
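As a rough illustration of how scraped prices could be aggregated into a CPI-like series, the sketch below chains daily price relatives (geometric means over matched products) into an index. This is a deliberately simplified construction under my own assumptions, not the Billion Prices Project’s actual methodology, and the sample snapshots are made up.

```python
# Simplified chained price index built from daily scraped snapshots.
# Illustrative only; research-grade indices handle product churn,
# weighting, and quality adjustment far more carefully.
from statistics import geometric_mean


def chained_index(snapshots: list[dict[str, float]], base: float = 100.0) -> list[float]:
    """Chain day-over-day price relatives of matched products into an index series."""
    index = [base]
    for prev, curr in zip(snapshots, snapshots[1:]):
        matched = [curr[p] / prev[p] for p in prev.keys() & curr.keys() if prev[p] > 0]
        daily_relative = geometric_mean(matched) if matched else 1.0
        index.append(index[-1] * daily_relative)
    return index


# Made-up snapshots for three collection days:
days = [
    {"milk": 1.00, "bread": 2.00, "eggs": 3.00},
    {"milk": 1.02, "bread": 2.02, "eggs": 3.00},
    {"milk": 1.03, "bread": 2.05, "eggs": 3.06},
]
print(chained_index(days))  # index level per day, starting at 100.0
```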
Employment rates, while harder to measure, can be assessed through the appearance and disappearance of job postings and publicly announced hiring. As most hiring nowadays happens through online services and websites, the information is readily available to governments. Collecting historical data on it can give insight into how much employment rates are changing.
At the very least, the velocity of employment can be measured, which can serve as a proxy signal for general rates. Such data, however, is highly sensitive, and collecting it should only be undertaken with legal counsel, even if the actor in this case is a central bank.
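A rough sketch of how such a velocity signal could be derived from repeated scrapes of public job listings follows. The posting identifiers are hypothetical, and the counts are only a proxy for hiring activity, not an employment rate.

```python
# Hiring-velocity proxy from snapshots of job-posting IDs collected on
# successive days. The IDs below are made up; in practice they would come
# from scraping public job boards, subject to the legal review noted above.
def posting_velocity(snapshots: list[set[str]]) -> list[dict[str, int]]:
    """For each consecutive pair of snapshots, count new and removed postings."""
    velocity = []
    for prev, curr in zip(snapshots, snapshots[1:]):
        velocity.append({
            "new_postings": len(curr - prev),      # appeared: likely new vacancies
            "removed_postings": len(prev - curr),  # disappeared: likely filled or withdrawn
            "net_change": len(curr) - len(prev),
        })
    return velocity


# Hypothetical posting IDs over three collection runs:
runs = [
    {"job-001", "job-002", "job-003"},
    {"job-002", "job-003", "job-004", "job-005"},
    {"job-003", "job-005"},
]
print(posting_velocity(runs))
```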
So, web scraping can make macroeconomic measures more readily available to both central banks and researchers. It also makes acquisition nearly instantaneous, as data can be extracted from nearly any public source in real time.
As such, automated publicly available data collection has the potential to completely remove the issues associated with data lag. In other words, it may solve one part of the equation when estimating the effect of monetary policy.
Conclusion
While web scraping has still not seen widespread adoption in the public and academic sectors, its potential to solve or minimize existing problems is immense. It may be one of the most revolutionary practices for advancing large-scale statistical analysis and providing more immediate and actionable insights than any other data collection method that exists right now.
Additionally, collection processes are no longer expensive as web scraping has evolved significantly over the past few years. In fact, companies have even started providing tools for free for non-commercial use cases through projects such as 4beta, created by Oxylabs.