The Google Pixel 4 and 4a support Live HDR+ and Dual Exposure controls, features first introduced with the Pixel 4 (note that these won’t be backported to older devices). The tech giant published a detailed blog post explaining how the two features work.

First, what is Live HDR+? It shows a real-time preview of what the final HDR+ photo will look like. Note that this is just a preview, derived using a different algorithm. The real HDR+ pipeline captures 3-15 underexposed photos (merging multiple frames is what keeps the noise down), aligns them and merges them.
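To make the merge step concrete, here’s a minimal Python sketch. This is not Google’s actual align-and-merge pipeline (which is far more robust to motion and misalignment); it simply assumes the burst frames are already aligned and shows why stacking underexposed frames still produces a clean image.

```python
import numpy as np

def merge_burst(frames):
    """Toy stand-in for HDR+'s merge step (illustrative only).

    `frames` is assumed to be a list of already-aligned HxW or HxWx3
    float arrays in linear space, captured underexposed so that bright
    areas like skies don't clip.
    """
    stack = np.stack(frames, axis=0)
    # Averaging N frames reduces random sensor noise by roughly sqrt(N),
    # which is why merging short, dark exposures still yields a clean image.
    return stack.mean(axis=0)
```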
After the merge, tone mapping is applied to produce the final photo, making details visible in both the highlights and the shadows. To figure out the right tone mapping, the phone computes a 2D histogram of the image.
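To give a rough idea of how a histogram can drive a tone curve, here’s a simplified Python sketch. It uses a plain 1D luminance histogram rather than the 2D histogram Google describes, and the real HDR+ tone mapping is considerably more sophisticated, so treat this as an illustration of the concept rather than the shipped algorithm.

```python
import numpy as np

def tone_curve_from_histogram(luma, bins=256):
    """Build a toy global tone curve from the luminance histogram
    (a simplified stand-in for HDR+'s real tone mapping).

    `luma` is a float array in [0, 1]. Brightness ranges where many
    pixels are concentrated (typically the shadows in an underexposed
    merge) get more of the output range, lifting detail there while
    compressing the highlights.
    """
    hist, edges = np.histogram(luma, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]  # normalized cumulative distribution in [0, 1]
    # Blend with the identity curve so the effect stays moderate.
    curve = 0.5 * cdf + 0.5 * np.linspace(0.0, 1.0, bins)
    return np.interp(luma, edges[:-1], curve)
```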
However, current mobile chipsets don’t have the computational power to run that full algorithm 30 times per second for the viewfinder. Instead, a dash of AI is used: the image is sliced into small tiles and a neural network predicts the tone mapping for each one. Every pixel in the viewfinder is then computed as a combination of the tone maps from the nearest tiles.
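Here’s a hedged Python sketch of that tile-and-interpolate trick. The shapes and names are my assumptions for illustration, and the neural network that predicts the per-tile curves is out of scope, so `curves` is taken as given.

```python
import numpy as np

def apply_tiled_tone_maps(luma, curves):
    """Apply per-tile tone curves with bilinear blending (illustrative sketch).

    `luma` is an HxW image in [0, 1]. `curves` is a (Ty, Tx, N) array with
    one predicted tone curve (an N-entry lookup table) per tile. Each output
    pixel blends the results of the four nearest tiles' curves, which is
    what hides the tile boundaries in the viewfinder.
    """
    h, w = luma.shape
    ty, tx, n = curves.shape

    # Continuous tile coordinates for every pixel.
    fy, fx = np.meshgrid(np.linspace(0, ty - 1, h),
                         np.linspace(0, tx - 1, w), indexing="ij")
    y0, x0 = np.floor(fy).astype(int), np.floor(fx).astype(int)
    y1, x1 = np.minimum(y0 + 1, ty - 1), np.minimum(x0 + 1, tx - 1)
    wy, wx = fy - y0, fx - x0

    # Index into each tile's lookup table by pixel brightness.
    idx = np.clip((luma * (n - 1)).astype(int), 0, n - 1)

    def look(yi, xi):
        # Evaluate every pixel through the curve of tile (yi, xi).
        return curves[yi, xi, idx]

    top = (1 - wx) * look(y0, x0) + wx * look(y0, x1)
    bottom = (1 - wx) * look(y1, x0) + wx * look(y1, x1)
    return (1 - wy) * top + wy * bottom
```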
Google’s post includes a comparison between the predicted image and the actual HDR+ result. The prediction doesn’t get it quite right, but it comes pretty close (especially since you’ll be viewing it on the phone’s screen).

Balancing highlights and shadows is done automatically by Live HDR+. The Dual Exposure sliders give you manual control over the process, so you can get the desired look for your photo in camera. Traditionally, this is something you would do afterwards by processing the RAW file.

To access the Dual Exposure controls, tap on the Live HDR+ viewfinder and two sliders appear. The “Brightness” slider works like traditional exposure compensation, changing the overall exposure. It can be used to recover more detail in bright skies, or to intentionally blow out the background and make the subject stand out. The “Shadows” slider affects only the dark areas and it works by changing the tone mapping, not the exposure. It is most useful for high-contrast scenes, letting you boost the shadows to reveal details or suppress them to create a silhouette.
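As a rough mental model of what the two sliders do, here’s a toy Python sketch (my simplification, not the shipped implementation): Brightness rescales the linear image like exposure compensation, while Shadows only reshapes the tone curve.

```python
import numpy as np

def dual_exposure(linear_rgb, brightness_ev=0.0, shadows=0.0):
    """Toy model of the Dual Exposure sliders (illustrative assumptions).

    `brightness_ev` mimics the Brightness slider: plain exposure
    compensation, scaling everything by 2**EV. `shadows` mimics the
    Shadows slider: it only bends the tone curve, so bright pixels
    barely move while dark ones are lifted or crushed.
    """
    img = np.clip(linear_rgb * (2.0 ** brightness_ev), 0.0, 1.0)
    # A gamma-style curve acts mostly on dark tones: gamma < 1 lifts
    # the shadows, gamma > 1 pushes them toward black (a silhouette).
    gamma = 2.0 ** (-shadows)
    return np.power(img, gamma)
```

For example, `dual_exposure(img, brightness_ev=-1.0, shadows=0.7)` would darken the scene by a stop to save a bright sky while lifting the foreground shadows.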
If you want a more detailed explanation of how all of this works, check out Google’s blog post, linked in the source.
Also read: Google “Pixel 5a” gets its first mention, appearing in AOSP