Welcome to FIA News & Insights, a one-stop resource featuring insights from senior investment professionals on timely market events and their views on the economy and their respective markets. Find the latest media information on Frost Investment Advisors, LLC, and the most recent reprints, as well as archival information for your reference.
Artificial Intelligence Growth is Creating New Investment Opportunities
Recent advancements in artificial intelligence are creating an environment with the potential to revolutionize technology and industries around the globe. We’ve already seen major impacts in hardware and software development by chipmakers and in AI models being implemented for commercial and consumer use. However, the rapid pace of adoption is creating a bottleneck in tech companies’ ability to implement future growth plans. The massive buildout of infrastructure, and the power required for generative AI and data centers to operate, has the tech, utilities and energy sectors scrambling to meet demand. In this article, we’ll discuss some background on power generation and data center design, how industries are addressing the critical issues of building out infrastructure, and the new potential investment opportunities being created along the way.
Source: Statista, Cloudscene, Apollo Chief Economist. Note: Data as of March 2024
There are more than 7,000 data centers around the world, a figure that has doubled since just 2015. The U.S. leads with over 5,000 facilities spread across the country, more than the rest of the world combined. The second- and third-ranked countries, Germany and the U.K., have just over 500 data centers each. These facilities vary in size and scale based on the requirements of the users, or “tenants,” they serve. Smaller data centers may occupy just a few thousand square feet of a building, serve only a few clients, and require relatively small amounts of power (1-5 megawatts, for example). The larger campuses, made up of multiple buildings with massive electrical requirements for systems and cooling, can easily reach hundreds of megawatts or even more than a gigawatt (1,000 MW). To put that number in perspective, the data center corridor in northern Virginia known as Data Center Alley is the world’s largest concentration, with hundreds of operating facilities and an overall capacity of 2,552 MW, or roughly 2.6 GW. The power provided to this area alone is theoretically enough electricity to light up over half a million U.S. homes.
Source: datacentermap.com
Bloomberg estimates data centers currently consume more electricity each year than Italy. By 2034, it’s estimated that figure could reach the equivalent of India’s energy use. Behind this massive increase are the power requirements of GenAI. A single ChatGPT request consumes about 10 times the electricity of a Google search, enough to power a 60-watt incandescent light bulb for several minutes. Advanced tasks like generating images or videos, coupled with the increasing number of requests, mean power demand is growing exponentially.
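As a rough back-of-the-envelope check on the light bulb comparison, and assuming roughly 3 watt-hours per GenAI request (an outside estimate, not a figure from this article), the arithmetic works out to a few minutes of bulb time:

```python
# Back-of-the-envelope sketch: minutes a 60 W bulb could run on one request's energy.
# The ~3 Wh per request figure is an assumption, not taken from the article.
request_wh = 3.0       # assumed energy per GenAI request, in watt-hours
bulb_watts = 60.0      # incandescent bulb from the example
minutes = request_wh / bulb_watts * 60  # watt-hours / watts = hours; x60 for minutes
print(f"{minutes:.0f} minutes")  # prints "3 minutes"
```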
Prior to this new growth, demand had been essentially flat for decades, averaging less than a 1% increase per year over the past 30 years. But with the sudden shift in demand for additional construction and electricity, it’s estimated that from 2025-28 data center power consumption in the U.S. will increase to almost 100 GW, representing between 25% and 50% of all new electricity demand.
When comparing these projections to existing U.S. grid capacity and the data centers currently under construction, it appears we’re already facing close to a 40 GW shortfall in available electrical resources by 2028. Unfortunately, grid expansion is not a viable near-term option, as the planning process for infrastructure and transmission alone can take upwards of five to 10 years. When analyzing potential sources capable of backfilling the shortage, we’re limited to options that offer a rapid “speed to power.” Given that short time frame, even including all currently available sources (which is unlikely), it’s questionable whether enough capacity exists to fully meet demand. The issue becomes more complicated into the 2030s, when demand for gigawatts will continue to rise and these sources will have been fully exhausted.
Source: Morgan Stanley Research
Within the tech sector, companies have already recognized the problem of demand outpacing supply for both built infrastructure and available power. Massive capital investment is already underway, with the four largest mega-cap companies, or “hyperscalers,” projecting combined capital expenditures of close to a third of a trillion dollars in 2025, more than double their 2023 spending. Microsoft, for example, has indicated that roughly 40% of its capital expenditure is focused on infrastructure components, such as securing real estate, constructing facilities and sourcing power generation. The remaining 60% will be allocated to technology components, including GPUs, servers and hardware. While the allocation differs from company to company, most have indicated that the bulk of their spending increases is earmarked for data center investment.
Source: Morgan Stanley Research
Source: Goldman Sachs Global Investment Research
To understand the power and infrastructure needed for AI workloads, we’ll review the basic anatomy of, and differences among, data centers. These vary by size and scale but generally include similar components and perform the same functions. Because they are designed for constant use, data centers require access to power 24 hours a day, similar to other industrial centers. Consistent power for the data halls loaded with servers and equipment is critical. To maintain close to 99.99% reliability, facilities are typically rated for multiple levels of redundancy, ranging from a single path to power to multiple backup systems, including for temperature control. Cooling systems are critical to maintaining data centers because operating temperatures and electrical use are significantly higher than in the average office building. Cooling designs vary from simple air cooling to advanced direct-to-chip liquid cooling systems. To provide this level of security and reliability, facilities’ backup systems include multiple diesel generators, fuel cells or batteries that are automatically enabled in the event of a power outage. To serve a multitude of client sizes and needs, data centers are usually designed as scalable operations. A few different categories include:
Source: semianalysis.com
Retail data centers tend to be smaller facilities with lower capacity, providing only a few megawatts of power. They are designed for smaller tenants that lease just a few racks of server space, representing only a few kilowatts of power each. These facilities typically exist to give multiple or different customer types access in order to reduce their overall networking costs.
Wholesale data centers are much larger operations providing power in the 10-30 MW range. These larger-scale facilities are built for companies that tend to lease entire floors or sections of server space. They are frequently designed to be more modular and customizable, with options to build out and utilize more space and servers as companies scale.
Hyperscaler data centers are typically designed and purpose-built as bespoke projects by larger tech companies for their own use. Power consumption in these facilities is significantly larger, from 40 MW up to more than 1 GW. The footprints and capacities can be so large that they’re frequently designed as campuses of multiple buildings covering acres rather than square feet. This category is typically self-built and owned by the hyperscalers themselves rather than leased from third-party providers. That said, the size of the company doesn’t dictate the size or type of data center used: large hyperscalers may still use wholesale or retail facilities when flexibility in services or client location becomes critical.
Source: semianalysis.com
A good example of a large hyperscale data center build is Meta’s announced $10 billion project in Richland Parish, Louisiana. The campus of this 2 GW facility is large enough to cover a significant portion of Manhattan. Announced in December 2024, the custom project is expected to span over 4 million square feet across multiple buildings and would be the single largest data center project to date. To power this massive facility, Entergy Louisiana plans to build three natural gas-fired combined-cycle combustion turbines with a combined capacity of 2,260 MW, two of which will be located in Richland Parish. In addition, Meta plans to match 100% of the data center’s electricity consumption with clean, renewable power, including adding at least 1.5 GW of renewable energy back into the grid. Wind and solar power are currently being discussed as sources for this purpose.
Source: Meta
Source: Frost Investment Advisors
The U.S. electrical grid is powered by many widely dispersed plants using a variety of fuels; in some cases, multiple sources are used within the same facility. As of 2023, plants using fossil fuels like coal and natural gas accounted for about 60% of power generated, with natural gas the largest single source at 43%. “Clean power,” including renewable sources and nuclear, represented roughly 20% each. Due to environmental concerns and a preference for lower emissions, renewable power is the fastest-growing segment across the country. Wind and solar farms are quick to build and frequently come with emissions credits, tax breaks or government subsidies. Combined with the fact that the fuel itself costs nothing, renewables represent some of the cheapest options for power generation per megawatt-hour. The levelized cost of electricity is a measure used to compare different methods of power generation; it represents the average total cost to build and operate a facility per unit of electricity over its lifespan. The national average for 2025 is forecast to be roughly $40 per megawatt-hour across all types, and the chart above breaks down expected prices by fuel type in 2028. But economics per megawatt-hour is not the only, or even the most important, issue in electricity generation and consumption. A power plant’s fuel will vary based on a very long list of factors, including power required, location, population, access to resources or the grid, the regulatory environment, construction costs, fuel costs and environmental impact. A company’s criteria for building and accessing large amounts of power can be just as complex.
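To make the levelized-cost definition concrete, here is a minimal sketch of the calculation: discounted lifetime costs divided by discounted lifetime generation. All the plant figures (capex, operating cost, capacity factor, discount rate) are chosen purely for illustration, not drawn from any chart above.

```python
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs
    divided by discounted lifetime generation, in $/MWh."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy

# Hypothetical 100 MW plant: $100M capex, $2M/yr opex,
# 35% capacity factor, 25-year life, 7% discount rate
mwh_per_year = 100 * 8760 * 0.35  # nameplate MW x hours/yr x capacity factor
print(f"${lcoe(100e6, 2e6, mwh_per_year, 25, 0.07):.2f}/MWh")
```

With these illustrative inputs, the result lands in the mid-$30s per megawatt-hour, in the same neighborhood as the ~$40 national average cited above.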
When addressing the requirements for the buildout of a data center, companies must evaluate several factors for how the facility will be designed and how power will be sourced. Most often, the overarching factor that dictates the design and power needed is based on the expected size and scale of the facility itself. Whether it’s a retail center with only a few megawatts or a multibuilding campus needing hundreds of megawatts, scale will help determine its location, real estate footprint, and electrical requirements. From there, the process also helps to narrow down the type of equipment and fuel source to be used. When it comes to the large tech companies with very deep pockets, many assume they can afford to custom build and power their projects however they prefer. While this is partially accurate, not all megawatt-hours are created equally, as each fuel source has both advantages and disadvantages, and the reality is projects must balance wants versus needs.
Most, if not all, tech companies have a net-zero emissions pledge and prefer to use renewables and clean power sources when possible. The dilemma they face is how to generate capacity at these levels while limiting their carbon footprint. Typical sources include wind, solar, geothermal and hydroelectric. However, the term “clean power” is loosely applied and typically just designates power generation with zero carbon emissions. This means a company can also fulfill a zero-emission pledge by earning carbon credit offsets. Meta’s plan referenced above, adding an equivalent amount of solar and wind power back to the grid to offset emissions from the natural gas actually powering its facilities, is an example of this strategy. In addition, despite the fact that it’s not actually renewable and comes with environmental risks and waste byproducts, nuclear is also considered a clean power source, as generation produces zero carbon emissions.
On the other end of the spectrum, traditional fossil fuels are still the most frequently used because of the abundance of resources, legacy production and efficiency of generation. Clean energy is obviously preferred over carbon emissions, and we have seen a big push to shut down coal-fired plants due to their high emissions and environmental impact. But fossil fuels have advantages of their own, striking a balance where other clean sources come up short on speed to power and capacity.
As compute power and chip innovation increase at a rapid pace, the GPUs used in data centers tend to depreciate quickly. Amazon noted on its fourth-quarter earnings call that it is reducing the useful life of chips from six years to five, and some analysts theorize that they lose 90% of their economic value over that span. Since this hardware eventually becomes obsolete, there is inherent value in starting operations sooner rather than later. Wind and solar win this race, with the ability to build and access power within about six to 18 months. More complex projects like hydroelectric can take a decade or more to build. Nuclear has the longest lead time; the planning phase alone can take up to 10 years, and nuclear facilities are notorious for regulatory intervention that extends planning or requires design revisions, leading to inevitable construction delays. Fossil fuel generation tends to fall somewhere in the middle at about three to five years, with the timeline largely dictated by access to the heavy equipment required, like gas turbines and transformers.
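As a quick sketch of what the analysts’ 90% figure implies, a constant annual rate of value retention over a five-year useful life works out to about 63% per year, i.e. a decline of roughly 37% annually:

```python
def implied_annual_retention(fraction_retained, years):
    """Constant annual value-retention factor implied by retaining
    a given fraction of value over a multiyear span."""
    return fraction_retained ** (1 / years)

# Analysts' theorized 90% economic-value loss over a 5-year useful life
# means only 10% of value is retained at the end of year five.
r = implied_annual_retention(0.10, 5)
print(f"value retained each year: {r:.1%}")  # prints "value retained each year: 63.1%"
```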
The regulatory environment can also have a major impact on speed to power. The Federal Energy Regulatory Commission (FERC) oversees everything from interstate transmission and wholesale electricity transactions to energy production and transport to enforcing environmental rules within markets. This oversight tends to extend the planning, approval and construction process, lengthening the wait to connect to power. A unique situation that creates a significant advantage for Texas power and utilities is the Electric Reliability Council of Texas (ERCOT), an independent system operator that manages power distribution solely within the state, making Texas the only state in the country with its own power grid. By keeping electricity transmission intrastate, Texas largely avoids federal regulation. Rather than answering to FERC, Texas and ERCOT are regulated by the Public Utility Commission of Texas. The reduced federal regulation, along with a more energy-friendly environment, means Texas projects can potentially cut total lead times by several years compared to other states or markets. This creates a big advantage and a strong motivation for companies eager to access power as quickly as possible.
Source: McKinsey & Company
Source: ERCOT
In addition to speed, capacity refers to a power plant’s ability to provide electricity on a consistent and reliable basis, which is essentially mandatory for data center operations. Looking at the charts below, we can see significant differences among fuel sources. Nuclear is the most dependable and can provide access to power nearly 100% of the time, day or night. On the other end of the spectrum, wind and solar are significantly less reliable, with capacity factors of just 25% and 35%, respectively. The lower rates are not surprising, considering the sun shines for only about half of each day, and overcast days can interrupt solar generation, driving reliability down even further. Similarly, wind farms may be cheap and quick to build, but if the wind isn’t blowing, the turbines are essentially useless. To alleviate these disruptions and deficiencies, storage technology such as batteries and fuel cells is rapidly being developed to help mitigate the capacity issues of these sources.
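Capacity factor translates directly into expected annual output: nameplate capacity times hours in a year times the factor. A minimal sketch, using the wind and solar figures above and an assumed ~95% factor as a stand-in for nuclear’s “nearly 100%”:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def expected_annual_mwh(nameplate_mw, capacity_factor):
    """Expected yearly generation given nameplate capacity (MW)
    and capacity factor (fraction of time at full output)."""
    return nameplate_mw * HOURS_PER_YEAR * capacity_factor

# Wind (25%) and solar (35%) factors come from the discussion above;
# the 95% nuclear figure is an assumed approximation of "nearly 100%".
for fuel, cf in [("nuclear", 0.95), ("solar", 0.35), ("wind", 0.25)]:
    print(f"100 MW {fuel}: {expected_annual_mwh(100, cf):,.0f} MWh/yr")
```

The same 100 MW nameplate yields nearly four times the annual energy as nuclear than as wind, which is why nameplate megawatts alone understate the differences among sources.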
As a good example of how inconsistent wind and solar can be, the two charts below show the ERCOT fuel mix on two days within a single week, Jan. 28 and Feb. 2, 2025. On a cold, overcast day in January, solar and wind were not strong contributors to the fuel mix, and natural gas was needed to provide the bulk of the electricity over the 24-hour period. However, just five days later, with the wind blowing on a sunny Texas day, we see the opposite effect, with wind and solar making up the majority of generation for the day.
Source: gridstatus.io
In addition to the traditional power sources already discussed, the option that provides the fastest speed to power and some of the best capacity available is the conversion of cryptocurrency mining sites. In fact, relative to other power sources, crypto sites can almost be considered turnkey: servers and operations are converted to focus on AI compute rather than crypto mining. The convenience of these sites lies in the fact that they are already designed with many of the requirements of data centers; with 24/7 operations, significant power loads, and electrical and cooling needs, they operate in a similar fashion. The major drawbacks are availability, as the number of sizable operations is limited, and the fact that conversion must be economically beneficial to owners. Ultimately, there is a tradeoff in sacrificing potential crypto profitability for a revenue stream based on leasing space or power to a hyperscaler.
Source: U.S. Energy Information Administration
Another factor in securing access to power is the actual location of generation facilities, which affects the grid, transmission and distribution lines. In the traditional method of grid access, a power plant generates electricity, which passes along transmission lines, then on to substations and distribution lines, and eventually to the commercial or residential customer. A different approach, offering a more customized solution, is co-locating power generation and supply alongside consumption. The charts below illustrate the differences, using a wind turbine as an example of power generation tied directly to the end customer, effectively “behind the meter.” However, the source could be any fuel type, including nuclear, renewables or natural gas. Not only does this method provide a customer with direct access to the required power, but it can also reduce overall electricity costs and help relieve stress on transmission lines and the grid. It may also provide quicker access to power, as these projects are designed to supply a particular customer and do not involve transmission. Several recent large-scale data center deals have been structured “behind the meter” to take advantage of these benefits.
Source: windadvantage.com
Source: Morgan Stanley Research
The evolution and adoption of GenAI appears to be the beginning of another technology revolution. The global consulting firm McKinsey & Co. estimates that GenAI has the potential to add between $2.6 trillion and $4.4 trillion annually to the global economy. With demand growing exponentially and expected to triple to 100 GW, it’s estimated that an investment of roughly $85 billion in new infrastructure and power will be required by the end of the decade. The rapid expansion already has the tech sector taking action, making large investments in new projects and signing multi-decade deals to secure power as quickly as possible. One recent example is Amazon Web Services, which signed a $650 million contract with Talen Energy to acquire its 960 MW Cumulus data center campus, co-located with and powered by the Susquehanna nuclear plant in Pennsylvania. Another is Microsoft, which signed a 20-year contract with Constellation Energy Corp. to launch the Crane Clean Energy Center, enabling the restart of the 835 MW Three Mile Island Unit 1 nuclear reactor by 2028. These are just two recent examples, but they show how critical quickly securing large amounts of reliable power has become. The economic structure of these deals also shows how demand is outstripping supply, given these companies’ willingness to pay substantial premiums over standard, or regulated, utility rates.
These new opportunities are still in the beginning stages, and we expect more to develop as AI technology continues to evolve, making this a long-term investment theme over the next ten years or more. As we’ve seen, the benefits are not limited to tech companies; demand is creating growth and opportunity in other sectors and industries. With utilities and power plants being contracted to build and generate more and more power, the fuel those plants require to provide the needed electricity pulls the energy sector into this growth story as well. Given the focus on speed to power and capacity, we already have major deals with nuclear power providers. However, with the current demand forecast, it’s clear available nuclear cannot fully meet the need, and we can expect virtually all power sources to be used. Based on its advantages in access, availability and reliability, we expect natural gas generation to be an ideal fuel and power source to help close the demand gap, especially within Texas. With vast real estate available, energy companies eager to find a better market for their gas production, and a utility commission inviting new generation, the new Stargate project is a prime example of what’s to come. This massive AI infrastructure project, with an expected investment of $500 billion over the next four years, plans to build up to twenty new data centers across the nation, the first of which is set to be built in Abilene, Texas, fully powered by and running on Texas energy and infrastructure. If the growth and demand projections are accurate, projects and developments such as these could be just the beginning.
Frost Investment Advisors, LLC, a wholly owned subsidiary of Frost Bank, one of the oldest and largest Texas-based banking organizations, offers a family of mutual funds to institutional and retail investors. The firm has offered institutional and retail shares since 2008.
Frost Investment Advisors' (FIA) family of funds provides clients with diversification by offering separate funds for equity and fixed income strategies. Registered with the SEC in January 2008, FIA manages more than $4.7 billion in mutual fund assets and provides investment advisory services to institutional and high-net-worth clients, Frost Bank, and Frost Investment Advisors’ affiliates. As of Feb. 28, 2025, the firm has $5.2 billion in assets under management, including the mutual fund assets referenced above. Mutual fund investing involves risk, including possible loss of principal. Current and future portfolio holdings are subject to risks as well.
To determine if a fund is an appropriate investment for you, carefully consider the fund’s investment objectives, risk, charges, and expenses. There can be no assurance that the fund will achieve its stated objectives. This and other information can be found in the Class A-Shares Prospectus, Investor Shares Prospectus or Class I-Shares Prospectus, or by calling 1-877-71-FROST. Please read the prospectus carefully before investing.
Frost Investment Advisors, LLC (the "Adviser") serves as the investment adviser to the Frost mutual funds. The Frost mutual funds are distributed by SEI Investments Distribution Co. (SIDCO) which is not affiliated with Frost Investment Advisors, LLC or its affiliates. Check the background of SIDCO on FINRA's http://brokercheck.finra.org/.
Frost Investment Advisors, LLC provides services to its affiliates, Frost Wealth Advisors, Frost Brokerage Services, Inc. and Frost Investment Services, LLC. Services include market and economic commentary, recommendations for asset allocation targets and selection of securities; however, its affiliates retain the discretion to accept, modify or reject the recommendations.
Frost Wealth Advisors (FWA) is a division of Frost Bank [a bank subsidiary of Cullen/Frost Bankers Inc. (NYSE: CFR)]. Brokerage services are offered through Frost Brokerage Services, Inc., Member FINRA/SIPC, and investment advisory services are offered through Frost Investment Services, LLC, a registered investment adviser. Both companies are subsidiaries of Frost Bank.
This commentary is as of March 21, 2025, for informational purposes only and is not investment advice, a solicitation, an offer to buy or sell, or a recommendation of any security to any person. Managers’ opinions, beliefs and/or thoughts are as of the date given and are subject to change without notice. The information presented in this commentary was obtained from sources and data considered to be reliable, but its accuracy and completeness is not guaranteed. It should not be used as a primary basis for making investment decisions. Consider your own financial circumstances and goals carefully before investing. Certain sections of this commentary contain forward-looking statements that are based on our reasonable expectations, estimates, projections, and assumptions. Forward-looking statements are not indicators or guarantees of future performance and involve certain risks and uncertainties, which are difficult to predict. Past performance is not indicative of future results. Diversification strategies do not ensure a profit and cannot protect against losses in a declining market. All indices are unmanaged, and investors cannot invest directly into an index. You should not assume that an investment in the securities or investment strategies identified was or will be profitable.
NOT FDIC Insured • NO Bank Guarantee • MAY Lose Value