Anybody in the real estate development industry knows that data is stored in silos. Sites can be affected by a myriad of factors: environmental hazards, market conditions, easements, utility access, zoning, design overlays, floodplains, unsuitable soil types, hurricane zones, earthquake zones, deed restrictions, title restrictions, and more. Development feasibility is also constrained by the appetites of the capital markets. For example, you might need to build to a 7% Yield to Cost in order to attract capital into a market with a going-in cap rate of 5.5%. Feasibility is further constrained by the current cost to build, which early in the process is usually given as a rough dollars-per-square-foot estimate. Development costs are largely determined by the product type, which in turn is shaped by the size of a site and whether the appropriate rent levels exist in the market.
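To make that capital-markets constraint concrete, here is a minimal sketch of the arithmetic in Python (the dollar figures are hypothetical, purely for illustration): a project "pencils" when its projected yield to cost, stabilized net operating income divided by total development cost, sits comfortably above the market's going-in cap rate.

```python
# Hypothetical figures, for illustration only.
stabilized_noi = 1_400_000      # projected annual net operating income ($)
total_dev_cost = 20_000_000     # land + hard costs + soft costs ($)

yield_to_cost = stabilized_noi / total_dev_cost         # 7.0%
going_in_cap_rate = 0.055                               # market cap rate for stabilized deals
development_spread = yield_to_cost - going_in_cap_rate  # ~150 bps of margin for taking development risk

print(f"Yield to cost: {yield_to_cost:.1%}, spread over cap rate: {development_spread:.1%}")
```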
A developer figures all of these factors out at some point in the development process. Some cost more money to figure out, and some are only revealed later in the process, but eventually they will all be assessed. The information is out there; it just takes a very long time to retrieve.
In our digital age, where data has almost become a new currency, this information is more valuable than ever. What would happen if we were able to break through the data silos in the real estate industry?
First off, the development proforma would be a lot easier to build. Market data would be filled in automatically for each product type, and the best use could be determined realistically. That market data, combined with data on setbacks, zoning, height restrictions, and the like, would determine the highest and best use for a site. Operating expenses would be calculated using a combination of benchmark data and real estate tax rates. Construction costs could be filled in for a suggested product type. All of these inputs would spit out one output: the residual land value of the parcel.
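As a rough sketch of how such a tool might back into that number: the residual land value is simply whatever budget the stabilized income can support at the target yield to cost, minus every cost other than the land. The function below is illustrative only; the input names and structure are assumptions, not an actual product feature.

```python
def residual_land_value(rentable_sf, market_rent_psf, opex_psf,
                        hard_cost_psf, soft_cost_pct, target_yield_to_cost):
    """Back into what a developer could pay for the land once every other input is known.

    In the scenario described above, all of these inputs would be pulled
    automatically: market rents, benchmark operating expenses, and construction
    costs for the suggested product type.
    """
    noi = rentable_sf * (market_rent_psf - opex_psf)   # stabilized net operating income
    supportable_cost = noi / target_yield_to_cost      # total budget the NOI can support
    hard_costs = rentable_sf * hard_cost_psf
    soft_costs = hard_costs * soft_cost_pct            # soft costs as a share of hard costs
    return supportable_cost - hard_costs - soft_costs  # whatever is left over is the land value
```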
With all of the data in one place, residual land values could be calculated instantly. A parcel's value would no longer be defined by what a single multifamily or self-storage developer would pay for it; it would be analyzed for every major product type. The market constraints and product-specific data would help tell whether a parcel is better suited to a hotel developer that can pay a lot of money for the site or a self-storage developer that can pay a minimal amount. Finally, a site could be analyzed by proximity factors. Walk scores, traffic counts, and similar metrics could add points to a site's development potential and overall value.
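Reusing the residual_land_value sketch above, scoring a parcel across product types is just a loop and a max. The per-product assumptions below are hypothetical stand-ins for the aggregated market, cost, and zoning data described earlier.

```python
# Hypothetical per-product-type assumptions; in practice these would come from
# the combined market, cost, and zoning data. For the hotel, "rent" and "opex"
# are rough revenue and expense proxies per square foot.
product_assumptions = {
    "multifamily":  dict(rentable_sf=180_000, market_rent_psf=28.0, opex_psf=10.0,
                         hard_cost_psf=220.0, soft_cost_pct=0.25, target_yield_to_cost=0.060),
    "self_storage": dict(rentable_sf=90_000,  market_rent_psf=14.0, opex_psf=5.0,
                         hard_cost_psf=95.0,  soft_cost_pct=0.20, target_yield_to_cost=0.075),
    "hotel":        dict(rentable_sf=130_000, market_rent_psf=70.0, opex_psf=30.0,
                         hard_cost_psf=320.0, soft_cost_pct=0.30, target_yield_to_cost=0.085),
}

# Value the parcel under every use and keep the one that supports the highest land price.
values = {use: residual_land_value(**inputs) for use, inputs in product_assumptions.items()}
highest_and_best_use = max(values, key=values.get)
print(highest_and_best_use, f"${values[highest_and_best_use]:,.0f}")
```

Proximity factors like walk scores or traffic counts could then be layered on as adjustments to the rent or yield assumptions in the same table.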
Now what if we combined this with 3D modeling? Imagine toggling a 3D building model onto a parcel map. The 3D building would represent the highest and best use of a site as determined by the automatic highest and best use/land residual analysis. Now imagine these 3D buildings on every parcel in your city.
Also imagine the highest and best use as determined by regulations versus the highest and best use as determined by market forces. A city planner could very easily tell where a property's value is being restricted. For example, imagine seeing a parcel that zoning and market data say is best for a 5-story apartment project, then removing the zoning layer and seeing that the site is actually ideal for a 30-story office tower. Or imagine looking at a parcel zoned for single-family homes and seeing that its highest and best use could be smaller-lot detached townhouses. The insights into the market would be invaluable from a development and city planning perspective.
Finally, imagine what could happen if every parcel's residual land value were transparent. Could land become a commodity traded on the market? Could you securitize a parcel and make it more liquid? Could land ownership become as accessible as the stock or bond market?
What do you think?
The official Site Identify blog
David Morin (co-founder)