Lead Line and Sounding Pole
A sounding pole is the simplest way of measuring depths - a marked pole that is lowered over the side of the boat until it touches the bottom. It is obviously limited to use in very shallow waters.
A lead line is simply a weight (often of lead) at the end of a line marked at intervals. The line is coiled and thrown over the side, and let out until the weight hits the bottom, at which point the depth is read off the line by the leadsman. He then coils the line in again and repeats the process. The lead line has been used by navigators since at least the time of the Ancient Egyptians (from around 3400BC), right through to the 20th Century. This was done with a simple coiled rope until the 19th Century, when mechanical aids were introduced to make the process easier and the results more consistent. Even with mechanical assistance, taking a single sounding in 1000 meters of water is a major undertaking, let alone a whole row of soundings. Yet in the 19th Century they were measuring the depths of the Atlantic Ocean, as was necessary for the laying of the first transatlantic cables.
Hydrographic survey by lead line.
Massey lead line surveying machine
Because of the time and effort needed to use a lead line, soundings were always sparse - generally several miles apart unless a detailed survey was carried out in coastal waters, which was a very time consuming process. Obviously this gives a limited view of the sea bed, but most nautical charts in use today still include some lead line soundings on them.
Hydrographic survey using a drag wire
For safe navigation, all that is needed is the minimum depth, so at the beginning of the 19th Century drag wires began to be used in coastal waters. A drag wire is strung between two vessels, with supporting buoys in between, and set at a known depth. If it can be taken safely along a channel, then it is known that the minimum safe depth for navigation is the depth of the drag wire.
Single Beam Echo Sounders
At the beginning of the 20th Century, a number of people were working on using sound to measure depth. There are many claimants for who first thought of the idea, and who produced the first working systems, but these started coming into practical use in the 1930s. These typically displayed the depth on a dial, or on a paper trace, and needed to be manually matched to the position of the vessel.
The sounder works by sending a ping of sound out in a cone, and timing its reflection back to the vessel. Knowing the speed of sound, the depth can then be calculated. Cone angles currently vary from about 10° for general navigation use, down to ¼° for narrow beam survey grade sounders. Obviously a narrower beam gives higher resolution, but the vessel must also make more tracks to fully cover the sea bed. As an example, at 100m depth a ¼° beam covers a circle with a 1m diameter, whereas a 10° beam covers a 34m diameter, so beam angle becomes increasingly important as depths increase. Simple processing of the signal gives a depth measurement, but more advanced processing methods can also give information on the nature of the sea bed and objects in the water column.
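The timing and footprint arithmetic above can be sketched in a few lines of Python. This is an illustrative sketch, not any particular sounder's firmware; it assumes a nominal 1500 m/s sound speed, and treats the quoted cone angles as half-angles, which approximately reproduces the 34m and 1m footprint diameters given above:

```python
import math

SOUND_SPEED = 1500.0  # m/s, nominal value for sea water

def depth_from_ping(two_way_time_s, sound_speed=SOUND_SPEED):
    # The ping travels down and back, so depth is half the round-trip distance.
    return sound_speed * two_way_time_s / 2.0

def footprint_diameter(depth_m, half_angle_deg):
    # Diameter of the circle a conical beam illuminates on a flat sea bed.
    return 2.0 * depth_m * math.tan(math.radians(half_angle_deg))

print(depth_from_ping(0.2))                      # 150.0 (m)
print(round(footprint_diameter(100, 10), 1))     # ~35 m for the wide beam
print(round(footprint_diameter(100, 0.25), 2))   # ~0.87 m for the narrow beam
```

In practice the sound speed would come from a measured profile or an atlas rather than the nominal constant.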
The most common frequency in coastal waters is 200kHz. Lower frequencies reach greater depths and also allow better discrimination between fish, seaweed, rock and mud, but they give lower spatial resolution and have wider beam angles.
The speed of sound in sea water is about 1500m/s, but varies with the density, salinity and temperature of the water, and this can affect the depth by as much as ±6%. The speed of sound can be measured as part of the hydrographic survey or, as is done with TeamSurv, atlases of the mean speed of sound against location and time of year are used.
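As a quick illustration of that sensitivity (a sketch with made-up numbers, assuming a sounder calibrated to a nominal 1500 m/s): the computed depth is directly proportional to the assumed sound speed, so a roughly 6% speed error gives a roughly 6% depth error.

```python
NOMINAL_SPEED = 1500.0  # m/s, the speed the sounder assumes
two_way_time = 0.2      # s, a ping that reads as 150 m at the nominal speed

reported_depth = NOMINAL_SPEED * two_way_time / 2.0
for actual_speed in (1410.0, 1500.0, 1590.0):  # roughly -6%, nominal, +6%
    true_depth = actual_speed * two_way_time / 2.0
    error_pct = 100.0 * (reported_depth - true_depth) / true_depth
    print(f"actual speed {actual_speed:.0f} m/s: true depth {true_depth:.1f} m, "
          f"reported depth in error by {error_pct:+.1f}%")
```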
Apart from automating the onerous task of swinging the lead, the major breakthrough of echo sounding is that it gave a continuous line of closely spaced soundings. The spacing between survey lines varied, but a typical value is 10% of the chart scale. So a relatively detailed chart giving the approaches to a harbour at a scale of 1:10,000 may have survey lines spaced as much as 1,000m apart. Even close inshore, survey lines may be 200 - 300m apart.
Multibeam Echo Sounders
In the 1970s the next breakthrough was the introduction of multibeam sounders. These use multiple beams fanned out perpendicular to the survey vessel's centreline, and were the first systems to achieve full bottom coverage. Their beams may extend 60° each side of the vertical, so even allowing for some overlap between survey lines, a survey line will typically cover 3 times the depth of water. The resolution is high enough to produce a 0.5m or 1m grid in coastal waters. Vertically, in shallow waters the accuracy may be as good as a few centimetres, and about 0.2% of depth in deep water.
With the beams going so far off the vertical, the effects of ship motions are significant, so in addition to the multibeam system itself an inertial measurement unit is needed to correct for motions. Also, variations in the speed of sound become more critical, both due to the longer distances of the outlying beams, and refraction due to variations in the speed of sound.
As the depth of water decreases, the width of each swath decreases, and so more survey lines are needed to give full bottom coverage. At some point the decision may be made to use a lower cost single beam survey instead.
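The trade-off described above can be put in numbers. This is a simple flat-seabed sketch, not a survey planning tool; the 60° beam limit and the 20% overlap between adjacent swaths are illustrative assumptions:

```python
import math

def swath_width(depth_m, max_angle_deg=60.0):
    # Width of sea bed covered by beams out to max_angle_deg each side of vertical.
    return 2.0 * depth_m * math.tan(math.radians(max_angle_deg))

def line_spacing(depth_m, overlap_fraction=0.2, max_angle_deg=60.0):
    # Spacing between parallel survey lines, leaving some overlap between swaths.
    return swath_width(depth_m, max_angle_deg) * (1.0 - overlap_fraction)

for depth in (10.0, 50.0, 200.0):
    print(f"{depth:5.0f} m deep: swath {swath_width(depth):6.1f} m, "
          f"lines every {line_spacing(depth):6.1f} m")
```

With a 60° limit the swath is about 3.5 times the water depth, which after overlap matches the "3 times the depth" figure above; halving the depth halves the swath, which is why shallow areas need so many more survey lines.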
The drawback to this high level of coverage and accuracy is cost. This can vary significantly due to factors such as the water depth, size of survey area, ease of access of the area to the survey vessel, and time lost due to bad weather, but costs of £3000 - £4000 per square mile are often given for carrying out a survey in coastal waters and processing the data.
Lidar
Lidar (light detection and ranging) technology measures elevation or depth by analyzing the reflection of pulses of laser light off an object. Lidar survey systems are typically mounted on an aircraft or drone, and provide seamless, contiguous coverage between land and sea. By using terrestrial Lidar at low tide, areas that are under water at high tide can be measured. Typically it produces a 1-2m grid with a swath of up to 1000m, with a vertical accuracy of 0.15m and a horizontal accuracy of 0.6m.
Bathymetric lidar is used to determine water depth by measuring the time delay between the transmission of a pulse and its return signal. Systems use laser pulses received at two frequencies: a lower frequency infrared pulse is reflected off the sea surface, while a higher frequency green laser penetrates through the water column and reflects off the bottom. Analyses of these two distinct pulses are used to establish water depths and shoreline elevations. With good water clarity, these systems can cover depths of 0.5 to 50 meters, though 30m is often a more realistic upper limit. Results are typically produced on a grid of between 2m and 5m resolution, with a 50 - 400m wide swath. Vertical accuracy is typically to 50cm, and horizontal accuracy 5m. It suffers when reflections are poor, e.g. when there is rough water, or a dark, rough or otherwise poorly reflecting sea bed.
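For a nadir-pointing pulse, the depth calculation from the two returns is straightforward: the green pulse covers the water column twice at the speed of light in water. A minimal sketch, ignoring off-nadir geometry and refraction at the surface, and using an assumed refractive index of 1.33:

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s
N_WATER = 1.33            # approximate refractive index of sea water

def lidar_depth(return_delay_s):
    # Delay between the surface (infrared) return and the bottom (green) return;
    # the green pulse travels the water column down and back at c / n.
    return (C_VACUUM / N_WATER) * return_delay_s / 2.0

# A 90 ns delay between the two returns corresponds to roughly 10 m of water.
print(round(lidar_depth(90e-9), 2))
```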
Satellite Derived Bathymetry
Satellite derived bathymetry (SDB) is actually an umbrella term for a number of very different approaches to producing bathymetric data, which only have in common their dependence on satellite data. For more information on the various types of satellite derived bathymetry, and how it ties in with our crowd sourced bathymetry, have a look at the BASE Platform project.
Optical SDB was first developed in the 1970s, but has improved with better algorithms and imagery. It works by using light reflected off the sea bed and so, like Lidar, is at its best when there is smooth, clear water and a smooth, pale sea bed, with no clouds to block the light. Depths can be up to 25m, though 10-15m is more typical. Resolution varies from free Landsat images with 30m pixels, through to WorldView-3 at 1.25m, with a horizontal error of around 5m. Errors are typically given as 10 - 15% of the depth or 1-2m, but significant errors can arise in deeper waters, at or beyond the limit of the optical SDB system. Accuracy is improved by ground truth data, allowing the algorithm to be tuned to a specific location. A guideline cost is £450 per square mile.
Satellite altimetry can measure the height of the sea's surface to a couple of centimeters. Take away the waves, and the sea's surface is not flat, but has bumps and hollows that correspond to the underlying land mass. Where there is an underwater mountain, the higher mass of the rock means that the gravitational field is locally stronger at that point, so more water is attracted to the area and there is a bulge in the sea's surface. This method is the main one used for bathymetric maps of the oceans, generally modified by survey ship data where available, so you can see it in products like GEBCO and Google Ocean. Most of this data is produced by the scientific community, and may either be open data, or there may be restrictions on its commercial use, depending on the provider.
Whilst it offers global coverage, its limitations must also be borne in mind. Depending on the quality of altimetry data in an area, the resolution will be somewhere between an 8km and an 80km grid, so all but the largest sea mounts are likely to escape detection. In each grid square, the mean depth is provided to an accuracy of 150 - 250m compared to the actual mean depth, but obviously in an 80km square there can be considerable variations in depth if the terrain is rugged, so depth errors at discrete points can be many times greater than this. Also, it cannot be used close inshore, as the mass of dry land will affect the results.
SAR, or Synthetic Aperture Radar, offers two ways of determining depths, both of which are in the R&D stage rather than mainstream techniques.
First, where there are waves on the surface of the sea, their wavelength is a function of depth, and the waves refract as the depth of the sea bed changes. Thus by selecting images with clearly defined wave patterns, the depth can be calculated. This works from the point where the water becomes shallow enough to affect the wavelength, down to the edge of the surf zone.
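Under linear wave theory, the link between wavelength and depth is the dispersion relation ω² = gk·tanh(kd), which can be inverted for the depth d once the wavelength (measured from the image) and the wave period are known. A sketch of that inversion, assuming the period has been obtained separately, e.g. from a sequence of images:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_wavelength(wavelength_m, period_s):
    # Linear dispersion relation: omega^2 = g * k * tanh(k * d).
    # Solve for d given the observed wavelength and the wave period.
    k = 2.0 * math.pi / wavelength_m
    omega = 2.0 * math.pi / period_s
    ratio = omega ** 2 / (G * k)
    if ratio >= 1.0:
        return None  # deep water: the wavelength is no longer depth-limited
    return math.atanh(ratio) / k

# A 10 s swell shortened to an 80 m wavelength implies about 7 m of water.
print(round(depth_from_wavelength(80.0, 10.0), 1))
```

Once the observed wavelength approaches the deep-water value for that period (about 156 m for a 10 s swell), the inversion breaks down, which is the "shallow enough to affect the wavelength" limit mentioned above.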
Secondly, in very shallow shelving areas where there is a significant tidal range, the land/water boundary can easily be identified in the SAR images. If the state of the tide is also known, this can then give water depths along the water's edge at that time, so from a number of images taken at different states of the tide the full depth model can be built up.
Position Fixing
Being able to measure the depth is only half of the problem - it is also necessary to know where the measurement was taken. This may be relative to landmarks on the shore, or it may be absolute, e.g. latitude and longitude from GPS.
Whilst the Chinese knew about the compass as early as 200BC, it wasn't used for marine navigation until the beginning of the 12th Century, and was also in use in Europe by the end of that century - it isn't clear whether it was invented separately in Europe, or brought there by Arab traders. It enabled depths to be positioned fairly accurately relative to landmarks, and so enabled their plotting on a chart.
For positioning outside of the sight of land, whilst latitude could be determined by sighting of celestial bodies, for longitude an accurate chronometer was required, which wasn't developed until the middle of the 18th Century. This, combined with the sextant and some calculations, allowed a position fix to be produced. As the fix was absolute, rather than a compass fix being relative to some landmarks, it also opened the way towards global scale mapping.
From World War 2, radio based position fixing systems were developed, which revolutionised navigation. On the vessel, radar enabled position to be fixed as a range and bearing from landmarks on shore. But the most significant changes were initially in terrestrial radio transmission based systems. The key terrestrial systems were the American LORAN-C (range 1500 miles, accuracy 0.1 - 0.25 miles), the similar Russian Chayka system, and the British Decca system (range 400 miles, accuracy from a few meters to 1 mile). Also, high precision systems such as Hi-Fix were developed for local, temporary deployment.
These systems were all doomed once satellite positioning became available, and all have now been closed down. The first satellite navigation system widely available outside of the military, GPS, was developed to improve nuclear missile accuracy in the Cold War. The first satellites were launched in 1978, and in 1983 the US made civilian use of the system available, after a navigation error took a Korean civilian passenger plane over Soviet territory, where it was shot down. By the early 1990s all 24 satellites were operational, and the first lightweight, hand held receivers were becoming available. In May 2000, the accuracy of civilian systems increased as Selective Availability was switched off, so civilian signals were no longer degraded. In 1995 the Russian GLONASS system became operational, and many chip sets now support both GPS and GLONASS. The Chinese Beidou system and the EU's Galileo are also being rolled out, along with numerous other regional systems.
Our testing of a large number of general marine GPS systems, from cheap USB GPS receivers through to systems for commercial ships, showed a common level of accuracy across them all, with errors of less than 5m 95% of the time in operational conditions. Professional surveyors are more likely to use systems with RTK or post processing techniques, which can be accurate to a centimeter or better.
Depths on Charts
Indications of depths and shallows were first recorded in log books and sailing guides ("rutters"), and then in 1584 the first chart with depths on it was produced by the Dutch. From the beginning of the 19th Century, European states started setting up national Hydrographic Offices to provide charts for their navies, which had poorer charts than the merchantmen who bought theirs from commercial publishers. They soon started selling their charts, which put most of the independent commercial chart publishers out of business. This was reinforced by the introduction of the Safety of Life at Sea (SOLAS) regulations after the sinking of the Titanic which, amongst other things, gave national hydrographic offices a monopoly on providing nautical charts to commercial ships.
On a paper chart drawn up to IHO standards, there is a source diagram tucked away somewhere that gives a thumbnail view of the chart, and shows the source of depth data on the chart - the date and the method of survey. From this, a pretty good estimate of the accuracy and number of soundings in the area can be made. On a digital chart to the IHO's S-57 standard, there are a number of parameters that show the accuracy and coverage of the chart, but few hydrographic offices fill all of these in, and the actual details of the source survey are generally not available. On commercially produced charts you are at the whim of the chart publisher, which often means that no information is given.
It must be remembered that no map is a true model of the world; rather, it is created to meet the needs of its intended users. Nautical charts are no different in this respect. First, because they are designed for safe navigation, they use shoal bias - that is, they show the shallowest depths in an area - which is great for navigators, but doesn't meet the needs of many other users who want a more balanced view of the depths.
Secondly, they were designed for navigators to plot their tracks on, so the amount of information shown beyond that needed for safe navigation is reduced to leave enough white space for plotting tracks and positions. In particular, depths deeper than the draft of the largest ship of the day may be drawn only sparsely, sufficient for the navigator to know that they can pass through those waters safely. Of course ships have become bigger, with deeper drafts, over time, so areas that were not deemed necessary to survey at the time may need surveying now to meet the needs of safe navigation. Although digital charts could overcome the problem of displaying less data on the charts, most digital charts are still basically just facsimiles of paper charts, and have yet to use the benefits that a digital approach could offer. This may change with the S-100 series of standards for digital charts, but as yet these have not been implemented in commercial products.
Finally, whilst the preponderance of GPS for positioning means that most charts have now been redrawn to the WGS84 horizontal datum, vertically charts refer to a local sea level. Whilst many hydrographic offices comply with the IHO's recommendation to use Lowest Astronomical Tide (LAT), or Mean Sea Level (MSL) in areas with minimal tidal ranges, some states still use other datums, e.g. Mean Lower Low Water (MLLW) in the USA. Unfortunately one hydrographic office's definition of LAT or MSL seldom corresponds to another's, and only in a few parts of the world are these mapped to an absolute vertical reference, be it the local land mapping datum or a global reference such as a geoid. This isn't a problem for navigation, but makes it difficult for any sort of global mapping to a common vertical datum.