Abstract
From the use of a pinhole camera placed under a water tank, which was proposed almost 100 years ago, to the application of modern digital cameras mounted with sophisticated fisheye lenses, acquisition methods for capturing hemispherical photographs have undergone vigorous research and development. Over the past few decades, such photographs have been extensively used in evaluating energy and environmental aspects in urban contexts. In this review article, the advantages, limitations, and challenges of the various methods of acquiring photographs are described and compared. This involves both the devices themselves and the software tools. Several methods of direct acquisition of hemispherical photographs involving digital cameras, smartphones, the use of drones for photographs at elevations, and the application of thermal imaging technologies are discussed in detail. Indirect methods for generating hemispherical photographs are also discussed, highlighting the use of images from applications such as Google Street View (GSV). Based on a review of technical literature, several applications in energy and the environment that use information from hemispherical photographs as an analysis tool are presented and discussed. Among others, the following are discussed: the quantification of solar radiation potential; the assessment of indicators of local temperature and level of thermal comfort for pedestrians in urban areas; indoor and outdoor daylighting; and air and light pollution. Finally, several potential future research directions for the use of hemispherical photographs in built environments are discussed. These include advances in image processing, use of thermal imaging, solar potential assessment of solar-powered vehicles, applications of drone-mounted hemispherical photography, and fisheye videos.
1 Introduction
The ultra-wide-angle lens with a view subtending 180 deg was introduced nearly 100 years ago when Bond [1] and Hill [2] presented it as a practical tool for performing cloud-related surveys. As the photograph taken by such a lens is the projection of the hemisphere in front of it onto a flat plate, such photographs are commonly known as hemispherical photographs (hereafter termed “photographs”), as illustrated in Fig. 1. Such a photograph is possible when the camera aperture opens onto a denser medium followed by a less dense one, much as fish view the world from an aqueous medium. Therefore, these photographs are popularly known as fisheye photographs. The very first use of such a camera was reported by Wood [3], who placed photosensitive film inside a water tank and viewed the scene through a pinhole camera. However, for convenience, liquids are not used in such lenses at all. Instead, a spherical glass surface is incorporated into the lens in such a way that once the incoming ray is bent, it proceeds without further deviation. Where these photographs are taken facing upward under an open (or semi-open) sky, they are frequently termed whole-sky or sky-dome photographs.
The polar-coordinate system, having its origin at the center of the photograph, is applied to specify the location of a point, such that the angle (measured from a reference, generally true north in solar energy related applications) and the radius measured from the origin are proportional to the real azimuth and altitude angles, respectively, as shown in Fig. 2. The mathematical relationship between the radial measurements on the fisheye photograph and the altitude angle in the actual surroundings is an important characteristic of the photographs [4]. This relationship depends upon the projection (or mapping) that the device (camera, lens, reflector) or the method used has provided while generating such photographs. The Equidistant projection is generally preferred in scientific applications [5]. In this projection, the relationship is mathematically linear and so the transformations are straightforward. The Equal-area projection (also known as the Equisolid-angle projection) is another commonly used projection. In this projection, the ratio between the area on the photograph and the corresponding solid angle in the actual surroundings remains constant. It is usually preferred when the covered areas in the photographs are to be analyzed [6]. Orthographic and Stereographic projections are not recommended as they yield a more distorted image than the others, which eventually affects the quality of analysis [7].
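To make these projection relationships concrete, the following minimal sketch maps a radial distance on the photograph to a zenith (and hence altitude) angle for the projections mentioned above. It assumes a 180 deg lens whose image-circle edge corresponds to the horizon; the function name and normalization are illustrative and not taken from any of the cited tools.

```python
import numpy as np

def zenith_from_radius(r, r_max, projection="equidistant"):
    """Map a radial distance on the fisheye image (0..r_max) to a zenith
    angle in radians, assuming the edge of the image circle corresponds to
    the horizon (zenith = 90 deg) for a 180-deg lens."""
    x = np.clip(np.asarray(r, dtype=float) / r_max, 0.0, 1.0)
    if projection == "equidistant":        # r proportional to theta
        return x * (np.pi / 2)
    if projection == "equal-area":         # r = 2 f sin(theta / 2)
        return 2.0 * np.arcsin(x * np.sin(np.pi / 4))
    if projection == "orthographic":       # r = f sin(theta)
        return np.arcsin(x)
    if projection == "stereographic":      # r = 2 f tan(theta / 2)
        return 2.0 * np.arctan(x * np.tan(np.pi / 4))
    raise ValueError(f"unknown projection: {projection}")

# Example: a point halfway out on an equidistant photograph sits at a
# 45-deg zenith angle, i.e., a 45-deg altitude above the horizon.
altitude_deg = 90.0 - np.degrees(zenith_from_radius(0.5, 1.0, "equidistant"))
```

Under the equidistant mapping the radius scales linearly with angle, which is why the text describes its transformations as straightforward; the other mappings require the trigonometric inversions shown above.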
In the early days, such photographs were rigorously employed in plant research and were popularized as canopy photographs [8,9]. In fact, the substantial development in acquisition methods and analysis procedures associated with these photographs was achieved by the agricultural and forest ecology communities, which rapidly advanced their understanding of plant ecology, especially in connection with meteorology. A brief account is given here.
Evans and Coombe [10] used these photographs for studying the light intensity in woodland canopies while also significantly reducing the number of manual observations and improving the quality of results compared with conventional methods. Madgwick and Brumfield [11] developed a computer program that replaced the manual analysis of these photographs while combining the photographs with densiometric measurements. This led to greater accuracy and consistency in results that were previously subject to interference by inter- and intra-observer variations. Bonhomme and Chartier [12] improved the speed of the analysis process by incorporating a mechanical digitization device consisting of a light sensor for sampling the light intensities across the different portions of the photographs. Later, Chan et al. [13] invented a device to analyze the large numbers of photographs produced. In their device, a slide projector was used to project the negative of the photograph onto a plotter with a light detection sensor. A program running on a microcomputer directed the movement of the plotter arm to sense the light along pre-defined circular grid paths. The readings were returned to the microcomputer through an analog-to-digital module and analyzed to calculate various plant-related characteristics. Becker et al. [14] combined video technology and image processing for analysis purposes and eventually overcame the time and cost-related issues associated with printed photographs. Frazer et al. [15] compared, in practice, the accuracy of results obtained when analyzing photographs captured using digital and film cameras. The recommendation was to take a cautious approach when digital cameras were used. Schwalbe et al. [16,17] developed a fully automatic image processing method for classifying the pixels of digital hemispherical images as sky or vegetation. The method showed promising results in a wide range of weather conditions and was independent of the type and density of the surroundings. A more sophisticated, fully automated, multi-purpose, and multi-platform computer analysis package for canopy photographs, CIMES, was developed by Gonsamo et al. [18]. Recently, Wan et al. [19] suggested that the use of a traditional single-lens reflex (SLR) camera with a hemispherical lens is expensive and therefore not suitable for long-term outdoor measurements. They proposed a remote acquisition method based on an in-field embedded node with a low-cost image sensor and fisheye lens, and a host computer, connected through a 3G network.
As a result of the excellent progress and promising results achieved by the plant ecology community, these photographs began to gain popularity in other fields of research and development. Examples include nowcasting (using cameras with a spherical mirror [20], cameras equipped with fisheye lenses [21,22], mobile cameras [23], and security cameras [24]) and forecasting (using camera arrays [25], cameras with a spherical mirror [26], and cameras mounted with fisheye lenses such as security cameras [27,28], network cameras [29], high-resolution commercial cameras [30,31], custom-built cameras [32], and waterproof cameras [33]) to assess solar radiation by detecting cloud movements in open areas; determining the location of the sun for solar photovoltaic (PV) and solar concentrator tracking applications (such as finding the location of the sun [34]; optimizing sun tracking on cloudy days [35]; developing single-axis [36] and dual-axis sun trackers [37,38]); monitoring and controlling the solar flux in power plants [39,40]; measuring the angular distribution of light from solar reflectors [41–43]; estimating aerosol characteristics [44]; and determining solar absorptance for the clothed human body [45,46].
Over the past few decades, a considerable amount of research has been focused on making urban regions more sustainable, healthy, and liveable [47,48]. This is because more than half of the world’s population currently lives in urban areas [49]. It is also forecast that the majority of the growth in population in the coming decades will be in cities rather than rural areas [50]. Maintaining outdoor and indoor thermal comfort, ensuring fair access to daylight, and efficiently tapping opportunities to generate on-site thermal and electric energy using solar radiation conversion devices have been highlighted as serious concerns in these densely populated urban areas [51–53]. The literature shows that fisheye photographs have played a vital role, being integrated into the fundamental frameworks of methods and case studies pertaining to analyzing and resolving the aforementioned problems in urban regions.
This paper provides a review of the literature related to energy and environment assessments in an urban context where these photographs have been employed. It begins by describing and comparing the advantages, limitations and challenges of various acquisition methods for these photographs, presented in Sec. 2. Then, a detailed account of applications incorporating these photographs is provided in Sec. 3. Potential future research directions are discussed in Sec. 4. A summary is provided in Sec. 5.
2 Acquisition Methods
Acquisition methods comprise the devices and software tools used to capture or generate fisheye photographs, before they are analyzed. The methods can broadly be classified into (i) direct and (ii) indirect methods, as explained here.
2.1 Direct Methods.
Direct methods are those in which photographs are captured straight from a suitable device. According to the literature, these are the most frequently employed methods so far, and include using film, digital, smartphone, drone, thermal, or specialized cameras mounted with fisheye lenses or spherical mirrors.
2.1.1 Film Camera With Fisheye Lens.
In the early days, film cameras were the only devices available for taking hemispherical photographs. Holmer [54] made use of a film camera with an equidistant fisheye lens to capture photographs of urban canyons. The photographs were enlarged to 20 cm diameter and printed on photographic paper. For analysis, the photographs were transferred to a computer using a digitizing tablet. With such cameras, the main disadvantages were the time and cost required for scanning and processing the negatives [55].
2.1.2 Digital Camera With Fisheye Lens.
Due to technological advancements, digital cameras became popular as they eliminated the issues associated with film cameras. The major advantages include seeing the photographs immediately in the field, which can then be retaken if required, and storing the photographs in digital format, which can then easily be transmitted to a computer using simple steps [55]. The two main characteristics of these cameras are resolution (measured in megapixels or MP) and sensor type. Resolution is defined as the ability to reproduce fine detail in an image [56]. In other words, it corresponds to the visual quality of the photograph. The resolution of digital cameras has been greatly improved over the years and at present they can achieve up to 120 MP [57]. However, the highest resolution of fisheye photographs taken for urban applications, as reported in the literature, is 24.30 MP [58]. In digital cameras, the most common types of sensors are the Charge-Coupled Device (CCD) and the Complementary Metal Oxide Semiconductor (CMOS). CCD technology is more mature than CMOS, but suffers from several drawbacks, including high cost and complex power supplies and support electronics [59]. Hence, most of the studies performed in the past decade have used CMOS-based cameras. The camera models and their characteristics, along with reference to the studies they were used in, are chronologically listed in Table 1.
Camera model | Release | Resolution | Sensor type | Studies |
---|---|---|---|---|
Nikon COOLPIX 950 [60] | February, 1999 | 1.92 MP | CCD | [61–63] |
Nikon COOLPIX 800 [64] | September, 1999 | 2.11 MP | CCD | [65] |
Nikon COOLPIX 990 [66] | January, 2000 | 3.10 MP | CCD | [67–69] |
Nikon COOLPIX 4500 [70] | May, 2002 | 4.13 MP | CCD | [71–75] |
Nikon COOLPIX 5400 [76] | May, 2003 | 5.26 MP | CCD | [77] |
Nikon COOLPIX 8400 [78] | September, 2004 | 8.00 MP | CCD | [79] |
Olympus SP-350 [80] | August, 2005 | 8.10 MP | CCD | [81,82] |
Canon EOS 5D [83] | August, 2005 | 12.80 MP | CMOS | [84,85] |
Nikon D80 [86] | August, 2006 | 10.20 MP | CCD | [87–89] |
Canon EOS 5D Mark II [90] | September, 2008 | 21.10 MP | CMOS | [91–93] |
Canon EOS 60D [94] | August, 2010 | 18.10 MP | CMOS | [95] |
Nikon D7000 [96] | September, 2010 | 16.20 MP | CMOS | [97] |
Nikon D5100 [98] | April, 2011 | 16.20 MP | CMOS | [99–101] |
Canon EOS 6D [102] | September, 2012 | 20.20 MP | CMOS | [58,103] |
Nikon D610 [104] | October, 2013 | 24.30 MP | CMOS | [105] |
To take photographs, a fisheye lens is installed on a camera. The most important characteristic of such a lens is its angle of view (AOV), also sometimes referred to as the angular field of view (FOV), which is defined as the angle subtended by the lens in the direction the photograph is taken, as illustrated in Fig. 3. To capture the entire hemisphere in front of the lens, the AOV should be at least 180 deg. The widest fisheye lens ever designed had an AOV of 270 deg [106]. When selecting the lens, care should be taken as not all lenses are compatible with every camera model. This is because, in addition to matching the mechanical fixture, the size of the image produced by the lens and the size of the sensor should also match. If the image is bigger than the sensor’s dimensions, an undesirable cropped photograph will be generated [107]. If the image is smaller than the sensor, image masking is required as an additional task, in which the unwanted area of the photograph is carefully removed [108]. The other important aspect is the trade-off between camera resolution and AOV. The native resolution of the camera is spread over the entire view of the lens [109]. Hence, for a fixed resolution, a wider AOV reduces image detail, so high-resolution cameras are recommended for better quality photographs. The different lenses, along with their AOV, found during the literature survey are listed in Table 2, accompanied by the camera models and references to the relevant studies.
Fisheye lens | AOV | Camera model | Studies
---|---|---|---
Nikon FC-E8 [110] | 183 deg | Nikon COOLPIX 950 | [61–63]
 | | Nikon COOLPIX 800 | [65]
 | | Nikon COOLPIX 990 | [67–69]
 | | Nikon COOLPIX 4500 | [71–75]
 | | Olympus SP-350 | [81,82]
Sigma [111,112] | 180 deg | Canon EOS 5D | [84,85]
 | | Canon EOS 6D | [103]
 | | Nikon D80 | [89]
 | | Canon EOS 5D Mark II | [92]
 | | Nikon D7000 | [97]
 | | Nikon D610 | [105]
 | | Nikon D5100 | [100,101]
Nikon FC-E9 [113] | 183 deg–190 deg | Nikon COOLPIX 5400 | [77]
 | | Nikon COOLPIX 8400 | [79]
Nikon AF-S [114] | 180 deg | Nikon D80 | [87]
Nikon AF DX [115] | 180 deg | Nikon D80 | [88]
Raypro Pro HD [116] | 180 deg | Nikon D5100 | [99]
Canon EF [117] | 180 deg | Canon EOS 6D | [58]
The security surveillance and sports industries have found some very useful applications of hemispherical photography and filming [118]. This has led to the development of cameras with dedicated, fixed, and perfectly matching fisheye lenses. Recently, Rehman et al. [119] used such a camera (Podofo High-Definition camera [120]) to analyze the solar potential at parking machines installed in a densely built city center.
Solmetric’s SunEye® [121] is another, more professional tool that is used for obtaining photographs to assess sites’ solar potential. It has a built-in image processor that instantly provides results and comes with an integrated digital camera mounted with a pre-calibrated fisheye lens, electronic inclinometer and compass, and a Global Positioning System (GPS). The tool has been used in several studies and real projects deployed in urban environments [122–124].
2.1.3 Smartphone Camera With Fisheye Lens.
In terms of resolution, smartphone cameras are no longer inferior to dedicated digital cameras and very recently, a smartphone with a 108 MP camera has been unveiled [125]. Additionally, smartphones comprise a small computer with a sophisticated operating system that offers a programming interface, access to the internet and cloud computing (and hence effectively unlimited storage and computing capability), Bluetooth (remote) control, GPS, and a built-in digital compass and level sensors, which are either not available in digital cameras or require the purchase of additional accessories that are often very expensive [126,127]. Therefore, smartphones are capable of not only capturing and storing photographs but analyzing them in-field as well. However, similar to digital cameras, smartphones require fisheye lens attachments for capturing hemispherical photographs [128,129]. The major drawback in using smartphones is that the attachments and the lenses available are more generic than dedicated. Magnetic attachments adhere to the camera surface better than clip-only attachments, but the internal sensor size does not necessarily match the size of the image produced by the lens. This was reported by Carrasco-Hernandez [130], who used a Nokia E5 smartphone with a Sumlung SL-FE12 fisheye lens and found that it was not able to capture the entire hemispherical photograph in a single take. This was due to the internal CCD of the camera, which cropped the image in the north-south direction. Hence, another photograph was taken after rotating the phone by 90 deg, and the two images were overlaid to generate a complete fisheye photograph. Parisi et al. [131] showed a good application of remote control of a smartphone camera via Bluetooth to ensure that there was no person in the view. Recently, Step Robotics launched a full suite for performing solar shading surveys, which includes a fisheye lens that can be mounted on iPhone 6 to X and all Android phones, and an online project proposal development platform [132]. Experimental tests reported by experts at the University of Oregon have shown excellent agreement between measured and simulated performance [133]. The smartphone models and lenses, along with references to the studies they were used in, are chronologically listed in Table 3 according to their release date.
Smartphone model | Release | Camera resolution | Fisheye lens | AOV | Lens attachment | Studies |
---|---|---|---|---|---|---|
Nokia E5 [134] | August, 2010 | 5.0 MP | Sumlung SL-FE12 [135] | 180 deg | Magnetic | [130,136] |
Sony Xperia Z1 [137] | September, 2013 | 20.7 MP | Oldshark lens [138] | 235 deg | Clip-only | [131] |
iPhone 6 [139] | September, 2014 | 8.0 MP | Not mentioned | 235 deg | Not mentioned | [140] |
2.1.4 Drone Camera With Fisheye Lens.
Recently, survey methods using low-altitude unmanned aerial vehicles (UAVs or drones) have gained a lot of attention [141]. The chief advantage of drones is that they can reach locations that are difficult or impossible for a person to access; for example, a height of several floors on the site of a building that does not yet exist. Moreover, they can be programmed to perform automated surveying, which may not only reduce the manual workload but also increase the accuracy of observations [142]. Almost all professional quality drones are equipped with ordinary digital camera(s), and some of them also offer fisheye lens attachments (e.g., the Parrot Bebop Drone [143]). Also, a light-weight smartphone with a fisheye lens attachment can be loaded on such drones, as almost all of them offer at least some payload carrying capacity [144]. Some training of the pilot prior to undertaking the survey is recommended, as a drone crash may lead to the loss of expensive sensory equipment [145]. Care should also be taken because some countries have regulations related to drone flying; e.g., flights might not be permitted in some areas and height restrictions may also apply [146].
The literature targeting the navigation of drones in urban landscapes has highlighted several challenges pertaining to the high probability of collisions with humans and built structures [147,148]. Technical challenges include limited space for the placement of devices on the drone’s body, functional aspects during the flight sequence, wind turbulence (speeds and directions) and limited height and time of flight [149]. Hence, the reported applications are very limited. Recently, Lee and Levermore [150] used a drone with a fisheye lens to conduct a solar access survey in an urban area; however, the model and specifications of the setup were not mentioned.
2.1.5 Camera With Spherical Mirror Arrangement.
For taking hemispherical photographs, the use of spherical mirrors has been less prominent compared with the use of lenses. These mirrors are much bigger than the lenses and, to use them, the person or camera must be positioned at a given distance from the reflector, while ensuring appropriate alignment with the sphere’s central axis. This makes the method more challenging and the device quite complicated and bulky. The other disadvantage of these devices is that, since the person stands between the surroundings and the reflector, part of the view in the photograph is obscured, reducing the reliability of the results. However, this problem can be overcome if a remotely controllable camera is used.
Solar Pathfinder [151] is a commercial tool that falls in this category. The diameter of the semi-transparent reflecting dome is roughly 6 in., and it is recommended that the dome be viewed from a distance of 12–18 in. above it and within 10 deg–15 deg of the vertical centerline [152]. Care should be taken while using this device, as viewing the solar glare in the dome with the naked eye may be dangerous. The device requires the placement of sun path diagrams, printed on paper, below the dome. Since these diagrams are location dependent, if the device is to be used outside the US latitude range (25 deg–49 deg), the manufacturers should be informed of the location to obtain the relevant diagrams [153]. Initially, the device was sold for manual use, with pencil markings and manual calculations suggested. Currently, taking the photographs and analyzing them using the company’s custom-built software (sold separately) is also possible, though some manual analysis work is still required. Black [154] presented a financial analysis tool for solar projects that was based on performing site analysis using Solar Pathfinder. Abu-Rub et al. [155] used Solar Pathfinder for analyzing the solar potential for cladding commercial towers in Doha, Qatar with solar PV panels.
The Meteonorm Horicatcher tool [156] is another device that is used for taking hemispherical photographs, based on a similar idea. It consists of a spherical mirror, a digital camera and mounting device and dedicated software for analysis, all sold in a single package. Isabella et al. [157] made use of this device in proposing a comprehensive methodology for modeling and sizing solar PV systems in urban regions. Recently, various studies have shown its usefulness for developing methods for estimating solar potential in urban environments [158,159]. Another interesting application of this tool was presented by Mouli et al. [160], who used it for analyzing the solar potential at e-bike charging stations.
Duluk et al. [161] compared the Solar Pathfinder and Horicatcher tool in passive house planning in terms of their practicality, cost, and accuracy. The authors reported that the Horicatcher tool yields more accurate results than Solar Pathfinder.
2.1.6 Thermal Imaging Camera With Lens.
Thermal imaging cameras capture the infrared radiation (wavelengths between 0.78 and 1000 µm) emitted by surfaces with temperatures above 0 K [164]. Since the emitted radiation intensity increases with surface temperature, the photographs depict the spatial distribution of temperature across an object’s surfaces [165]. These photographs are then used for purposes ranging from simple visual inspection to complicated processing and analysis.
Chapman et al. [166] argued that during the analysis stage, when the sky regions are to be separated from other regions (e.g., building surfaces and trees), the photographs taken by a thermal imaging camera could perform much better than those taken with an ordinary digital camera. They therefore attempted to use a custom-built “All Sky Thermal Infrared Camera” to analyze solar potential in urban canyons. It used an uncooled ferro-electric sensor with a spectral response between 8 and 12 µm. A germanium fisheye lens with a 180 deg AOV was mounted to ensure that the device would remain safe even when used under direct sunlight. The results showed a need to improve calibration, and the device performed well only in the absence of clouds.
2.2 Indirect Methods.
In the indirect methods, several photographs of the surroundings are captured from a single point using an ordinary camera lens and are then stitched together to generate a hemispherical photograph.
2.2.1 Panorama Conversion.
Maskarenj et al. [167] used the Google Street View (GSV) application [168] in an iPhone 5S smartphone [169] with an 8 MP main camera to capture and stitch 72 photographs, taken at different angles. The panorama was then transformed into a hemispherical image using the Polar-Coordinate Distortion Filter in Adobe Photoshop [170]. Although this method avoided the use of a fisheye lens altogether, taking several photographs to yield a single hemispherical photograph was time consuming. Moreover, care should be taken during image capturing, as misaligned photographs may cause registration errors during stitching, resulting in unwanted seams in the final images.
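The polar-coordinate remapping behind this conversion can be sketched as follows, assuming an equirectangular panorama that covers only the upper hemisphere (rows running from the zenith at the top to the horizon at the bottom, columns spanning 0–360 deg of azimuth) and an equidistant output disk. The function name, array conventions, and nearest-neighbor sampling are illustrative; this is not a description of the Photoshop filter itself.

```python
import numpy as np

def panorama_to_fisheye(pano, size=1024):
    """Remap an upper-hemisphere panorama (rows: zenith at top, horizon at
    bottom; columns: azimuth 0-360 deg) onto an equidistant fisheye disk.
    `pano` is an H x W x 3 uint8 array; returns a size x size x 3 array."""
    h, w = pano.shape[:2]
    yy, xx = np.mgrid[0:size, 0:size]
    cx = cy = (size - 1) / 2.0
    dx, dy = xx - cx, yy - cy
    r = np.sqrt(dx**2 + dy**2) / cx            # 0 at zenith, 1 at horizon
    az = np.arctan2(dx, -dy) % (2 * np.pi)     # 0 at top, increasing clockwise
    inside = r <= 1.0
    # Equidistant mapping: radius on the disk is proportional to zenith angle.
    row = np.clip((r * (h - 1)).astype(int), 0, h - 1)
    col = np.clip((az / (2 * np.pi) * (w - 1)).astype(int), 0, w - 1)
    out = np.zeros((size, size, 3), dtype=pano.dtype)
    out[inside] = pano[row[inside], col[inside]]
    return out
```

A bilinear (rather than nearest-neighbor) lookup would reduce the stair-stepping near the horizon, at the cost of a slightly longer routine.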
2.2.2 Street View Datasets Conversion.
Recently, a new approach, in which the existing datasets of rectangular photographs are accessed and stitched to form hemispherical photographs, has gained a lot of popularity. The dataset is accessed through available Application Programming Interfaces (APIs) and the stitching is performed using readymade or custom-built programs. This approach has offered an excellent way of performing assessments and should be the cheapest of the available methods as no device and/or lens is required at all.
According to the literature survey, GSV [171] is the most-used dataset due to its coverage in more than 80 countries [172]. These images are recorded using specially developed cameras (a setup comprising eight cameras installed in a rosette configuration), mounted on different types of vehicles that are suitable for the routes (such as cars, vans, tricycles, and snowmobiles) [173]. The company has hired thousands of drivers to capture these photographs around the world. A sophisticated technology has been used to stitch them together and tie them to the latitude and longitude they were taken at. The photographs can be accessed through static APIs, which require information about location (latitude, longitude or the textual address), compass directions, FOV, pitch, and the size (height and width) of the required image. So, from a single desired point, the photographs in all directions can be downloaded and then stitched to generate a hemispherical photograph. Due to the restricted access of Google in China, the GSV does not fully cover Chinese roads and thus two other datasets, Tencent Street View (TSV) [174] and Baidu Street View [175], are used for applications in China. Of these, TSV has the largest coverage to date [176].
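A minimal sketch of such a request loop is given below. The endpoint and parameter names follow the Street View static API fields listed above (location, heading, fov, pitch, size, and an API key); the key is a placeholder, quota and coverage constraints apply, and covering the upper hemisphere would additionally require repeating the loop at several pitch values (e.g., 0 deg, 45 deg, 90 deg) before reprojection.

```python
import requests

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"

def download_views(lat, lon, api_key, fov=90, pitch=0, size="640x640"):
    """Download one image per compass heading from a single point so the
    set can later be stitched into a panorama / hemispherical photograph."""
    images = {}
    for heading in range(0, 360, fov):  # e.g., 0, 90, 180, 270 for fov=90
        params = {
            "location": f"{lat},{lon}",
            "heading": heading,
            "fov": fov,
            "pitch": pitch,
            "size": size,
            "key": api_key,
        }
        resp = requests.get(STREETVIEW_URL, params=params, timeout=30)
        resp.raise_for_status()
        images[heading] = resp.content  # raw JPEG bytes
    return images
```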
However, caution should be applied as the datasets are not frequently updated and any newly built structures and temporal variations in tree canopies may limit the reliability and quality of results. Further, since the photographs are captured from the road at roughly eye level, this approach has a serious limitation as it cannot be employed at all desirable locations, such as building rooftops, tops of street poles, parking lots, and sidewalks. Even at the road, the photographs are captured from some distance, so assessment of the roads at very high spatial resolution is also not possible.
Liang et al. [177] presented a comprehensive automated methodology for downloading and stitching photographs, taken from all three of the mentioned datasets. The authors recommended that care should be taken during analysis as lanes and small alleys could significantly reduce the quality of the results. Gong et al. [178,179] and Li and Ratti [180,181] used GSV and presented a simple method for generating hemispherical photographs. Liu et al. [182] also used GSV but relied on the PTGui package for producing hemispherical photographs [183]. Table 4 summarizes the list of datasets along with the locations and studies they were used in.
3 Applications
3.1 Solar Energy.
The solar radiation that can be received during a given time period at a site with an unobstructed sky can be obtained using mathematical relationships [186], databases [187], and solar maps [188]. However, assessing the solar potential of sites with obscured sky views, which are very common in urban settings, is a complicated task [189]. Solar radiation comprises direct, diffuse, and reflected components. The direct (also known as beam) radiation component comes straight from the direction of the sun and creates sharp shadows if it is obstructed [190]. In an overcast sky, the diffuse radiation component consists of only the sky radiation subcomponent, which is generally assumed to be isotropic, i.e., having a constant magnitude in all directions. The sky radiation component depends completely on the fraction of the sky the surface can view, mathematically represented by the term Sky View Factor (SVF). So, if the surface is horizontal and has an unobstructed view of the sky, it sees the complete sky (SVF = 1.0). However, if it is tilted and/or has an obscured sky, it obviously sees less sky (SVF < 1.0). A vertical surface with an unobstructed sky has an SVF of 0.5. In a clear sky, the diffuse radiation component also includes a circumsolar radiation subcomponent, which comes from the sky around the sun [191]. Once again, as the circumsolar radiation relies upon the location of the sun, it does not reach the surface if there is any obstruction in its path. The diffuse radiation component may also include the horizon brightening subcomponent, which is intense in a near-horizon band and is more prominent during clear skies [192]. However, a dense urban landscape, hiding the horizon belt around the surface, eventually stops the collection of this subcomponent. The reflected radiation component is received from the ground, if the surface is tilted, and from the natural and urban objects in the surroundings.
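As a quick consistency check on the stated SVF values, the well-known relation for an unobstructed tilted surface, SVF = (1 + cos β)/2, reproduces 1.0 for a horizontal surface and 0.5 for a vertical one, and under the isotropic assumption the sky diffuse irradiance reaching a surface scales directly with its SVF. The short sketch below illustrates only this idealized case; obstructed scenes require the photograph-based SVF discussed in the following subsection.

```python
import math

def unobstructed_svf(tilt_deg):
    """SVF of an unobstructed surface tilted `tilt_deg` from horizontal:
    (1 + cos(beta)) / 2, giving 1.0 for horizontal and 0.5 for vertical."""
    return (1.0 + math.cos(math.radians(tilt_deg))) / 2.0

def isotropic_sky_diffuse(dhi, svf):
    """Isotropic-sky diffuse irradiance (W/m^2) reaching a surface whose
    view of the sky is characterized by `svf` (0..1)."""
    return dhi * svf

# Sanity checks matching the text: horizontal -> 1.0, vertical -> 0.5.
assert abs(unobstructed_svf(0) - 1.0) < 1e-9
assert abs(unobstructed_svf(90) - 0.5) < 1e-9
```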
Whole-sky photographs have played a significant role in assessing solar radiation potential (and its individual components), since they provide information about the obstructions in the surroundings of the site. A general methodology for determining the direct and circumsolar radiation potential is to identify the obstructions in the path of the sun: if the sun is blocked by an obstruction, that radiation will not reach the point from which the photograph was taken. For the diffuse solar radiation potential, on the other hand, the SVF is determined by analyzing the photographs. In Sec. 3.1.1, first a review of methods for evaluating the SVF using photographs is presented. Then, the literature on methods and case studies for estimating solar radiation potential is reviewed.
3.1.1 Sky View Factor.
Initially, a manual method for assessing the SVF was proposed by Steyn [193], in which printed photographs were divided into 37 concentric annuli. The total sky arc within each annulus was then measured, and the SVF was obtained by weighting and summing these arc totals. In order to reduce the manual work, Bärring et al. [194] digitized the photographs using a video camera and masked the sky pixels by increasing the grey tones using an image processing system. Later, Steyn et al. [195] utilized a video camera fitted with a fisheye lens to directly capture the scene and processed the image using a computer. However, difficulties were encountered in separating the sky pixels from pixels depicting sunlit walls. Holmer [54] developed a method based on digitizing photographs using a tablet connected to a computer and analyzing them via a computer program, written in BASIC for IBM-compatible computers. Since the calculation time was quite independent of the number of annuli, higher precision in the results was achieved by considering more annuli than were reported in previous work.
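A minimal sketch of this annulus-weighting idea, applied to a present-day binary sky mask (True = sky) of an equidistant, upward-facing photograph, is shown below. It uses exact ring weights derived from the cosine-weighted sky integral rather than Steyn's published approximation, so it should be read as an illustration of the approach, not a reproduction of any cited implementation.

```python
import numpy as np

def svf_from_sky_mask(sky_mask, n_annuli=37):
    """Annulus-weighted SVF from a square boolean sky mask (True = sky) of
    an equidistant, upward-facing fisheye image."""
    n = sky_mask.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    r = np.sqrt((xx - c) ** 2 + (yy - c) ** 2) / c      # 0..1 inside the circle
    zen = np.clip(r, 0, 1) * (np.pi / 2)                # equidistant mapping
    inside = r <= 1.0

    edges = np.linspace(0, np.pi / 2, n_annuli + 1)
    svf = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = inside & (zen >= lo) & (zen <= hi)
        if ring.any():
            p_sky = sky_mask[ring].mean()               # sky fraction in ring
            svf += p_sky * (np.sin(hi) ** 2 - np.sin(lo) ** 2)  # ring weight
    return svf
```

For a fully unobstructed mask the ring weights sum to exactly 1.0, matching the horizontal-surface case described in Sec. 3.1.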
Chapman et al. [166] argued that, despite some advanced image processing techniques based on color and contrast thresholding (e.g., Refs. [196,197]), one of the major challenges in determining SVF using hemispherical photography is to distinguish between pixels of sky and obstructions. The use of thermal cameras was proposed for this purpose, which enabled separation based on temperatures, i.e., surfaces being warm and the sky being a cold space. Another advantage of this technique was its applicability during both day and night. However, the method was unable to separate out the clouds; this required improvement in the algorithms or some post-processing, which was not demonstrated.
Hämmerle et al. [72] evaluated the SVF for an urban view taken at Szeged (south Hungary) using the RayMan program [198], which required a hemispherical photograph and manual setting of the color thresholds. Deviations in the results were found when they were compared with other nonphotographic methods (e.g., SkyHelios [199] and SOLWEIG [200] which depend on 3D numerical building data) which were corrected after applying the appropriate weights to the obstructions depending on their zenith angles.
Freitas et al. [81] proposed an obstruction surveying method for PV applications and considered the SVF calculated from photographs using SkyViewFactor Calculator [201] as a standard reference for comparing the results of other nonphotographic methods (e.g., the satellite visibility and signal intensity method, and raytracing in the airborne Light Detection And Ranging (LiDAR) data collection method). The study was conducted in the Faculty of Science of the University of Lisbon, Lisbon (Portugal).
Liang et al. [177] calculated the SVF using photographs generated from street view data and compared it with results obtained from a nonphotographic method (e.g., ray tracing of Digital Surface Model generated at 1 m resolution and 3D City Model generated at less than 1 m resolution). The results were found to be in good agreement; however, further research in terms of processing the large amount of data at an urban scale was recommended.
Ramírez-Faz et al. [163] proposed a mathematical model based on Moon–Spencer’s model to evaluate SVF incorporating the angular distribution of diffuse radiance and used photographs for verification purposes.
Parisi et al. [131] used a smartphone with a fisheye lens to evaluate the SVF from photographs using the SkyViewFactor Calculator and determined the solar ultraviolet protection factor of built structures of various sizes in Toowoomba, Queensland (Australia).
Xia et al. [185] developed a semantic segmentation processing-based algorithm for estimating the SVF from street-view data. Panoramic images were obtained through GSV API and were transformed to fisheye images. The study was conducted for a residential area adjacent to the Suita campus of Osaka University (Japan).
Recently, Liang et al. [184] argued that obtaining SVF from street-view data requires many complicated processes such as image transformations, use of machine learning, big image data processing, and the use of Geographic Information Systems (GIS). Therefore, they developed a GIS-integrated SVF calculation tool named “GSV2SVF”. The tool offers batch processing of large numbers of panoramic samples; it is open source and is free to use. The experiments were conducted at several urban locations in the US.
3.1.2 Solar Radiation Potential.
Tomori et al. [202] presented a manual method for estimating the fraction of irradiation, termed the shading fraction, that is blocked by surrounding obstructions, using the photograph. This fraction was suggested for use in determining the yield of building-integrated PV panels.
Yoon et al. [91] used SVF evaluated from a photographic method for estimating the solar radiation on inclined surfaces at hourly intervals. The raster image processing program, IDRISI [203], was used for the purpose. The case study was performed at the Korea University of Seoul (Korea).
Arboit and Betman [77] presented a case study of the Mendoza Metropolitan Area (Argentina) where the solar radiation potential was assessed for a forested urban environment. The openness and the sun paths in vertical photographs taken at different sites were compared with the irradiance values registered using portable irradiance measuring equipment. The effects of street width, trees, building heights, and morphology were discussed.
Pretlove and Osborne [204] compared the UK government’s Standard Assessment Procedures (SAP), used for predicting PV yield, with real-time solar radiation data collected for buildings in South West London. Since the guidelines for setting overshading factors were vaguely defined in the SAP, the authors used a hemispherical photograph overlaid with the sun path diagram to manually calculate the percentage of the sun path obscured by the surrounding obstructions.
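The sun-path overlay described above can also be automated: compute the solar position for each time-step, map it onto the fisheye image, and test the corresponding pixel against a binary obstruction mask. The sketch below assumes an equidistant, north-up, upward-facing photograph and uses the pvlib library for solar positions; it is an illustrative reimplementation of the general idea, not the procedure used in any particular cited study.

```python
import numpy as np
import pandas as pd
from pvlib import solarposition  # assumption: pvlib is installed

def fraction_of_sun_path_obscured(sky_mask, lat, lon, times):
    """Fraction of daytime sun positions that fall on non-sky pixels.

    `sky_mask` is a square boolean array (True = sky) cropped to the image
    circle of a north-up, equidistant fisheye photograph; `times` is a
    timezone-aware pandas DatetimeIndex (e.g., hourly over a year).
    """
    pos = solarposition.get_solarposition(times, lat, lon)
    above = pos["apparent_elevation"] > 0
    zen = np.radians(90.0 - pos.loc[above, "apparent_elevation"].to_numpy())
    azi = np.radians(pos.loc[above, "azimuth"].to_numpy())
    if zen.size == 0:
        return 0.0

    n = sky_mask.shape[0]
    c = (n - 1) / 2.0
    r = (zen / (np.pi / 2)) * c            # equidistant: radius ~ zenith angle
    rows = np.clip((c - r * np.cos(azi)).astype(int), 0, n - 1)  # north = up
    cols = np.clip((c + r * np.sin(azi)).astype(int), 0, n - 1)  # east = right
    # Note: east/west may be mirrored depending on whether the photograph is
    # displayed as seen from below or from above.
    return float((~sky_mask[rows, cols]).mean())

# Hypothetical usage:
# times = pd.date_range("2021-01-01", "2022-01-01", freq="1h", tz="Europe/London")
# f = fraction_of_sun_path_obscured(mask, 51.4, -0.3, times)
```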
Carrasco-Hernandez [130] and Carrasco-Hernandez et al. [136] obtained the anisotropic angular distribution of diffuse irradiance by using an algorithm written in Octave [205], which requires the shading patterns of the sky from photographs. Direct and isotropic diffuse radiations were obtained by analyzing photographs, synthetically generated using street view datasets, via the RayMan program. The case study was conducted in Dover Street, at the University of Manchester (UK).
Lee and Levermore [150] used a drone fitted with a fisheye lens for evaluating the solar radiation potential by determining the SVF and the sunshine factor (the ratio between the yearly sunshine hours available at a site with obstructions and without obstructions). The case study was performed in Ulsan (South Korea).
Gilles et al. [75] found photographs were a useful tool for validating the results of their model, proposed for determining the solar radiation potential on building surfaces within built environments. Their model was based on data obtained from LiDAR surveys and the case study was performed in Geneva (Switzerland).
Lai et al. [65] developed correlations through regression analysis for estimating radiant fluxes for densely built environments. The correlation was based on the SVF, the Sunlit factor (defined as the fraction of sunlight that is visible from a point on the building’s facade), and the green view factor (defined as the fraction of the greenery area that is visible from a point located within a built environment). These factors were determined by image processing of photographs. The case study was performed in Hong Kong.
Freitas [82] also used photographs to validate a model proposed for evaluating the PV potential of buildings’ facades using a mobile camera device mounted with a fisheye lens. The case study was performed in Lisbon (Portugal).
Calcabrini et al. [159] presented a model for evaluating the solar energy potential of built environments based on the shape of the skyline profile, which was obtained by analyzing photographs. The case study was performed in Delft (Netherlands).
Rehman et al. [119] conducted a solar potential assessment of solar-operated parking machines. These modern machines are now commonly found in urban environments. The obscured portions of the sky and the hindrances in the sun path were determined by analyzing photographs. The case study was performed in Auckland (New Zealand).
Gong et al. [179] used GSV to generate photographs, which were processed to determine the direct, diffuse, and total solar radiation as well as their spatial and temporal patterns in Kowloon and Hong Kong.
Li and Ratti [180,181] presented a stitching method for generating photographs using the GSV database. These photographs were then analyzed to calculate the spatial and temporal distribution of solar radiation at street level in Boston, Massachusetts (USA).
Liu et al. [182] also used the GSV database but employed PTGui software [183] for stitching the individual images to generate the photographs. These photographs were analyzed to evaluate the feasibility of PV roads in Boston, Massachusetts (USA).
Oliveira Panão et al. [140] proposed a method to determine the solar radiation potential of an urban site by using the shading correction factor (defined as the fraction of solar radiation on the surface received in the presence of external obstacles compared with when it is received in their absence). An image processing technique based on the color thresholds was applied to photographs captured using a built-in smartphone camera equipped with a fisheye lens. The case study was performed in Lisbon (Portugal).
3.2 Urban Heat Island and Outdoor Thermal Comfort.
For a long time, the phenomenon of urban heat islands has been considered a well-accepted fact that provides very strong and widely documented evidence of human modification of the atmospheric environment [206]. Researchers have been struggling to relate urban heat intensity (UHI, defined as the temperature difference between rural and urban regions) to parameters such as population size and growth, urban density and morphology, and the thermal and radiative properties of outdoor surfaces [207]. Likewise, the presence of trees in urban areas has been recognized as an ecosystem service benefitting the population [208]. Some of the benefits of urban trees include temperature reduction through shading and evapotranspiration [209] and improvement in air quality through absorption of gaseous pollutants and interception of particles by plant surfaces [210]. Photographs have played a noteworthy role in driving forward the research in these areas.
Krüger et al. [71] used SVF as an indicator of local temperatures and the comfort levels of pedestrians during the daytime in streets in a dense urban environment, located in Curitiba (Brazil). The SVF was calculated from photographs, taken at several sites, using the RayMan program.
Balogun et al. [61] observed urban heat island effects and studied their characteristics in Akure (Nigeria). The authors applied Chapman et al.’s model [211] to the photographs to assess the SVF at different sites varying in sky openness. The results showed a strong influence of urban aspects on the UHI, and the authors concluded that nocturnal UHI was more frequent than daytime UHI.
Tan et al. [87], while working in a tropical urban environment in a residential area in the eastern Central Business District (CBD) of Singapore, developed a correlation between the SVF and the mean radiant temperature outdoors. The RayMan program was deployed for evaluating the SVF directly from the photographs.
The temporal variation of the temperature of urban surfaces was studied by Yang [73], who deployed a ray tracing algorithm on photographs captured in the Kowloon Peninsula (Hong Kong).
Maleki et al. [79] reported that the microclimatic conditions at different locations vary considerably and found a strong relationship between the UHI and urban density. The SVF was used to quantify sky openness and the photographs were used for the purpose. The study was performed in Vienna (Austria).
Yan et al. [92] analyzed the bivariate relationship between the temperature of the air and several parameters associated with urban morphology. These parameters included SVF, percentage vegetation cover, percentage of building area, distance to parks, and distance to bodies of water. A simple linear regression was employed for the purpose. In this study, the SVF was calculated by using the RayMan program, which analyzed photographs taken at different urban locations in Beijing (China).
Balogun and Balogun [62] studied the diurnal, monthly, and seasonal variation of bio-climatological aspects in Akure (Nigeria). This tropical city has a hot and humid climate. They estimated the SVF from the photographs using Chapman et al.’s model [211] and related it to the observed bio-climatological variations. The results indicated that city centers are subject to significant heat stress and are therefore prone to health risks.
For the Klang Valley (Kuala Lumpur), Toh et al. [212] presented field observations of temperature variations on streets with roadside trees. The SVF, determined by analyzing the photographs, was used as a tool for classifying the sites as open, sparse, and dense.
Cheung et al. [63] derived an empirical correlation for the maximum UHI as a function of SVF. For the purpose, photographs of 59 sites were captured and analyzed using a numerical method, programmed in matlab [213].
Xue and Lau [74] developed interrelations between site configuration (as depicted by SVF), microclimatic comfort, and the perceived personal evaluation of urban locations in Hong Kong and Singapore. The SVF was evaluated using the WinSCANOPY program [214], which used photographs taken at various locations.
Yilmaz et al. [99] developed a relationship between SVF, temperature, and humidity, to determine the thermal comfort levels for the Erzurum City Center, Erzurum (Turkey).
Jusuf et al. [88] proposed a range of mathematical models for determining maximum, minimum, average, daytime average, and nighttime average temperatures for urban settings in Singapore. The SVF was used as the parameter defining urban morphology. While statistically deriving the relations, the SVF was evaluated directly by analyzing the photographs.
Gaxiola Camacho [105] discussed a strategy for mitigating the UHI in arid regions by properly integrating urban agriculture into built environments. A custom program developed at the University of Arizona, named OUTDOOR, was used for assessing human thermal comfort; it requires the SVF, obtained by analyzing the photographs, together with meteorological data. Image editing programs such as Photoshop [215], ImageJ [216], and Microsoft Photo Editor [217] were also used for manually adding/subtracting vegetation in the photographs. The case study was performed in the city of Tucson, AZ.
The physiological equivalent temperature (PET) is the air temperature at which the heat budget of the human body in a typical indoor setting is balanced with the outdoor thermal conditions being assessed; it has a direct relationship with human comfort [218]. Crewe et al. [219] employed the SVF, estimated from photographs via the RayMan program, in developing a relation for the PET for an urban site in Tempe, AZ.
Takebayashi et al. [68] studied the effects of trees shading open spaces around buildings on the development of microclimatic conditions and pedestrian comfort. The case study was performed on the buildings near the station in Central Osaka. The site observations were recorded via photographs and the analysis of shading was performed by overlaying the sun path diagrams on them.
An interesting study of the connection between visual comfort and human comfort in outdoor environments under shaded and unshaded conditions was performed by Lam and Hang [97] at the Royal Botanic Garden, Melbourne (Australia). The SVF, evaluated using the RayMan program used on the photographs, was used as an indicator of the openness of the site.
Sosa et al. [220] evaluated the influence of various designs of social housing on summertime outdoor air temperatures and the requirements for energy for cooling purposes. The SVF was used as a key factor in classifying the built designs and was determined by analyzing photographs using the Pixel de Cielo 1.0 program. This study could be used to enhance the efficiency of energy utilization by providing technical design recommendations during the planning stages for social housing localities in cities with hot and dry climates.
Middel and Krayenhoff [58] reported several micrometeorological determinants of pedestrian thermal exposure that included the openness of built areas, as indicated through SVF, which was quantified by processing the photographs. The case study was performed in Arizona State University's main campus in Tempe, AZ.
Othman et al. [221] determined the effects of SVF on ambient temperature, mean radiant temperature, and PET and presented a case study for sites in Universiti Teknologi Malaysia (Kuala Lumpur). The SVF was determined using the RayMan program.
Klingberg et al. [100] explored a method of evaluating the urban Leaf Area Index (LAI) using an aerial LiDAR dataset. The LAI is defined as the ratio between total one-sided green leaf area and unit ground surface area [222]. The LAI is an important ecological characteristic that influences the urban climate. Photographs taken from the ground were used to validate the accuracy of the method, which was found to be in good agreement. The experiments and the simulations were performed for the Gothenburg municipality (Sweden).
Konarska et al. [101] used photographs for evaluating the SVF and LAI. The method involved processing near-infrared photographs that were taken with a digital camera mounted with a special filter lens. The image processing for identifying sky, trees, and buildings was performed in matlab. The experiments were performed in Gothenburg (Sweden).
Osmond [69] described these photographs as a useful tool for designing and evaluating urban sustainability. For the purpose, a streamlined methodology was developed to evaluate the SVF, LAI, and the visual diversity of the site. These parameters were combined to assess the comparative environmental performance and physical ambience of the site. The experiments were performed in Sydney (Australia).
Ong [223] proposed a new architectural and planning metric for greenery in cities and buildings, named the Green Plot Ratio (GPR). This metric is based on the average LAI of the area, which was calculated by processing photographs.
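Several of the studies above derive the LAI from the photographs. A common route, sketched below under the Beer–Lambert gap-fraction assumption evaluated at the 57.5 deg "hinge" zenith angle (where the projection coefficient is close to 0.5 for most leaf-angle distributions), inverts the measured gap fraction to an effective LAI. The function name and numbers are purely illustrative and are not drawn from any cited study.

```python
import math

def lai_from_gap_fraction(gap_fraction_57, g=0.5):
    """Invert the Beer-Lambert gap-fraction model P = exp(-G * LAI / cos(theta))
    at theta = 57.5 deg, where the projection coefficient G is ~0.5."""
    theta = math.radians(57.5)
    return -math.log(gap_fraction_57) * math.cos(theta) / g

# Example: a 20% gap fraction in the 57.5-deg annulus implies LAI ~ 1.7.
lai = lai_from_gap_fraction(0.20)
```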
3.3 Indoor and Outdoor Daylighting.
Significant energy consumption in urban settings comes from commercial and residential buildings. An appropriate building design that maximizes indoor daylight could help to substantially reduce net energy demands by affecting space lighting and heating/cooling loads [224]. Analyzing both indoor and outdoor daylight is important as it affects public health as well as plant growth. Photographs have been a key tool in analyzing and optimizing building designs and advancing this area of research.
Inanici [84] used a computer graphics technique known as high dynamic range (HDR) imaging [225], in which several photographs with varying exposure levels are taken to capture the wide variation in luminance. The photographs are fused together into a single image whose pixel values quantify the luminance. A digital camera with a detachable fisheye lens mounted on it was used for capturing these photographs. The author presented the simulation of an architecture hall at the University of Washington in Seattle (USA) using the Radiance Lighting Simulation and Visualization System (RLSVS) [226].
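A minimal sketch of this exposure-fusion step is shown below, using OpenCV's Debevec calibration and merge routines (an assumption for illustration; the cited studies used their own HDR pipelines). The resulting radiance map is only proportional to luminance; absolute values in cd/m² still require photometric calibration against a luminance meter.

```python
import cv2
import numpy as np

def merge_exposures_to_hdr(image_paths, exposure_times_s):
    """Fuse a bracketed exposure sequence (same scene, same size, 8-bit)
    into a single HDR radiance map with the Debevec method."""
    images = [cv2.imread(p) for p in image_paths]             # 8-bit LDR frames
    times = np.asarray(exposure_times_s, dtype=np.float32)
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)
    return hdr                                                 # float32, linear

# Example usage with hypothetical file names and shutter speeds:
# hdr = merge_exposures_to_hdr(["ev-2.jpg", "ev0.jpg", "ev+2.jpg"],
#                              [1/250, 1/60, 1/15])
```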
For studying solar access and the natural daylight availability in buildings through vertical windows, Ramírez et al. [162] proposed a novel synthetic projection that overcame the limitations associated with existing projections. The photographs taken were transformed into the projection using a specialized program developed in matlab. The case study was performed in the Rabanales Campus at the University of Cordoba (Spain).
Transparent luminescent solar concentrators (LSCs) could possibly be used as a means of combining window glazing with power generation in the urban environment. However, they have bright fluorescent coloration. Vossen et al. [95] studied their effects on comfort associated with the visual appearance and the light levels in an office building located at Eindhoven University of Technology, Eindhoven (Netherlands). The influence of glare was measured by analyzing vertical hemispherical photographs, which were taken using a fisheye lens mounted on a digital camera.
Knera and Heim [89] used photographs mimicking the light source for determining the lighting conditions in an office room. The HDR technique combined with the RLSVS was used for simulation purposes. The case study was performed at the Lodz University of Technology (Poland).
Maskarenj et al. [167] used photographs for validating their low-cost prototype, proposed for determining angular outdoor daylight luminance distribution based on light-dependent resistors. The photographs were generated by stitching the images available in the GSV database using Adobe Photoshop [170]. The case study was performed in Bombay (India).
Chaiyakul [67] performed field experiments in urban streets in Bangkok for assessing outdoor daylight performance. The photographs were obtained using a digital camera mounted with a fisheye lens. These photographs were analyzed to evaluate the sky illuminance entering from different directions.
3.4 Air Pollution.
In urban areas, air pollution is caused by the interaction between natural and man-made environmental conditions [227]. It is a serious problem in both under-developed and developing countries. A few studies have reported the use of hemispherical photographs for this purpose. For example, Dursun and Yavas [228] claimed that where a deep street canyon forms, the flanking buildings stop the wind flow and block the sunlight, which eventually gives rise to air pollution. The authors quantified the openness of streets by calculating SVF from photographs. While studying the air quality around the viaduct of an elevated highway, Joerger and Pryor [229] also used photographs to evaluate the SVF, which was used as a way to describe the clear sky view. This information yielded the extent to which emissions could be entrained into the street from the elevated highway.
3.5 Light Pollution.
Light pollution in urban regions has been identified as a serious problem, not just for astronomy, but also because of its negative impacts on environment and health (such as sleep disorders, obesity, diabetes, and cancer) [230]. Once again, only a few studies have reported the use of hemispherical photographs in the field of light pollution assessment.
Zotti [85] used the HDR technique to measure light pollution, applied to whole-sky photographs. These photographs were captured through a custom-built sky-dome capturing system, comprising a digital camera installed with a fisheye lens connected to a Notebook via a USB connection. The Notebook had a remote-controlled application, which was programmed to run the capturing sequence every few minutes, analyze the results, and execute more runs if required.
Jechow [103] reported light pollution measurements during Earth Hour by applying differential photometry. For the purpose, the photographs were taken from a digital camera mounted with a fisheye lens and were processed using a commercial program “Sky Quality Camera” (SQC) [231]. The case study was executed in an urban park, named Tiergarten, in Berlin (Germany).
Wallner [93] used vertical photographs to measure urban light pollution in small urban areas. A digital camera with a fisheye lens was used for capturing the photographs and the analysis was performed in SQC. The work also showed examples of different lighting situations, such as different types of lamps and luminaires, illuminated objects such as billboards and buildings and the effects of transition to LED technology. The case study was performed in the city of Eisenstadt (Austria).
4 Potential Research Directions
4.1 Advancement in Image Processing.
While analyzing photographs, image processing is primarily applied to recognize the class of each pixel (e.g., sky or obstruction). However, the task becomes complicated when the sky is cloudy or when there are surfaces with the same color as the sky/clouds. In addition, modern city centers have buildings with reflecting facades, which makes the scenario more difficult. More advanced applications may also include marking or classifying objects for different purposes. According to Bour et al. [232], most of the existing image processing methods are based on color thresholds, and these are claimed to result in poor accuracy in complex situations. New methods should involve modern algorithms such as segmentation [233] and feature extraction [234]. However, these algorithms require huge databases for training, massive computational resources, and prolonged processing times. Hence, a great amount of research is still needed to fill the research gaps in this area.
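For reference, the kind of color-threshold baseline criticized above can be expressed in a few lines. The thresholds here are purely illustrative and typically need per-scene tuning, which is precisely the weakness that segmentation and learning-based methods aim to remove.

```python
import numpy as np

def sky_mask_by_color_threshold(rgb, blue_thresh=1.15, bright_thresh=160):
    """Baseline sky/obstruction classifier: label a pixel as sky if it is
    bright or its blue channel clearly dominates. `rgb` is an H x W x 3
    uint8 array in RGB channel order."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    blue_dominant = b > blue_thresh * np.maximum(r, g)
    bright = rgb.mean(axis=-1) > bright_thresh   # catches white clouds
    return blue_dominant | bright                # boolean mask, True = sky
```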
4.2 Wide and Efficient Use of Thermal Imaging.
The use of thermal imaging cameras with fisheye lenses has so far been limited to sky view studies. Their use across the wider spectrum of urban energy and environment assessments is yet to be explored.
4.3 Solar Potential Assessment for Solar-Powered Vehicles.
Street-view-based solar potential assessment could be a great tool for analyzing the feasibility of solar-powered electric and hybrid vehicles. Such a study at a city scale would be able to answer questions such as: “What if someone replaces fossil-fuel powered cars with solar-powered cars?” or “What if the government plans to add a fleet of solar-powered public buses for inner-city travel?” However, to the author’s knowledge, no work has been reported in this direction.
4.4 More Applications of Drone-Based Surveys.
It must be noted that most of the application studies were conducted at near-ground level; hence, they do not report results at elevations, such as at balconies and near windows at some height. Drones with a camera mounted with a fisheye lens could serve this purpose. Once again, to the author’s knowledge, no such study has been reported in the literature.
4.5 Hemispherical Videos.
One of the drawbacks of still photography is that a large number of photographs, at a potentially infinite number of locations, are required for a detailed assessment, which is not practical. However, a video recording made using a fisheye lens could help in performing surveys for daylight assessments and in quantifying air and light pollution.
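As a starting point, frames could be sampled from such a recording at a fixed interval and fed to the same photograph-based analyses discussed above (SVF, sun-path overlay, etc.). The sketch below uses OpenCV's video reader and is only an illustration of the workflow, not a method drawn from the cited literature.

```python
import cv2

def sample_frames(video_path, every_s=5.0):
    """Pull still frames from a fisheye video at a fixed time interval."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0     # fall back if metadata missing
    step = max(int(round(fps * every_s)), 1)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)                # BGR array, one per interval
        idx += 1
    cap.release()
    return frames
```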
5 Summary
A comprehensive review of the acquisition methods and applications of the hemispherical photographs, in the context of urban energy and environment assessments, has been presented.
Direct methods, which include film, digital, smartphone and drone cameras, cameras mounted above a spherical mirror, and thermal imaging technology, have been discussed in detail. Indirect methods, in which hemispherical photographs are generated by stitching rectangular photographs obtained either with ordinary cameras (without any fisheye lens) or from publicly available street view datasets, were also discussed in light of the current literature. These methods were critically compared in terms of their advantages, limitations, and associated challenges.
The range of applications of hemispherical photographs for built environments, reviewed in this work, include deriving SVF, assessing solar energy potential, urban heat islands and thermal comfort, indoor and outdoor daylight, and air and light pollution.
Some potential research directions for urban applications of hemispherical photographs, which are either yet to be explored or require a considerable amount of work, were also discussed. These include advances in image processing, use of thermal imaging, solar potential assessment of solar-powered vehicles, applications of drone-mounted hemispherical photography, and fisheye videos.
Conflict of Interest
There are no conflicts of interest. This article does not include research in which human participants were involved. Informed consent not applicable. This article does not include any research in which animal participants were involved.
Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. The authors attest that all data for this study are included in the paper. No data, models, or code were generated or used for this paper.