Mr. Robot is a show about a young, antisocial computer programmer, Elliot Alderson (Rami Malek), who works as a cybersecurity engineer by day and a vigilante hacker by night. It’s all about hiding, about feeling trapped and finding a new escape.
Elliot is recruited by the mysterious leader of an underground group of hackers to join their organisation and help bring down corporate America, including the company he works for. Although he works for a corporation, his personal beliefs make it hard to resist the urge to take down the heads of the multinationals he believes are running, and ruining, the world. His moral dilemma shines through his first-person narration and his physical characterisation, but also through the cinematography. How he is framed tells the truth about the persecution he feels inside his head and how oppressive he believes the world to be. And the fact that other characters are framed in this strange way too means we are always inside Elliot’s head. The whole show is an exercise in point of view.
How? Have a look. This would be a standard, perfectly composed frame for a two-person shot, either a master or an OTS (over-the-shoulder) shot.
And this is what Tod Campbell does with Mr. Robot:
Characters are often placed at the very bottom of the frame. This leaves massive amounts of headroom that suggests a great weight hanging over their heads, and echoes their isolation.
Lead Room and Negative Space
The lead room is the space at the side of the frame toward which the character is looking or moving. Balanced framing usually calls for more room in front of the character than behind, to help convey the physical space the characters occupy.
Mr. Robot, however, goes with no lead room at all. Campbell decides to “shortsight” the characters, positioning their faces at the edge of the frame closest to the person they’re speaking to, emphasizing, once again, isolation. Even when they’re talking right to each other, they seem alone. They don’t know where they stand in relation to one another.
“Shortsighting is unnerving,” Campbell explains. “It further accentuates how fucked-up Elliot’s world is. The idea was to convey the loneliness. That’s the internal dialogue I had with myself: How do we tell that story? How do you get Elliot across?”
Rule of Thirds
The rule of thirds is not so much broken as used in a different manner. Instead of having the characters take up two thirds of the frame, leaving one third for lead room, Campbell goes with just one third for the characters. Wide shots also follow a rule-of-thirds approach, but it is the lines in the frame that determine the different areas, not the characters’ size within it.
As for leading lines, they are used widely and in consonance with the story. They are present in almost every shot, but especially in those that have to do with corporate America. Framings of the office and Elliot’s bosses reveal striking diagonals that create an overall unease and tension, but also describe the underlying feeling of imprisonment and corseted reality that Elliot faces in his day-to-day life.
And that’s not all there is to Mr. Robot’s cinematography. We could also talk about the wide, round lenses (Cooke S5s), the color, the open apertures and shallow depth of field for an “in his head” feeling, and so on. These are the reasons why I love this show. Oh, and the hacker stuff ;).
Russian photographer Anatoly Beloshchin has witnessed a lot of exciting adventures in his career — from getting to walk on the icy surface of remote Antarctica to swimming around with massive underwater creatures such as sharks, manatees and even whales.
Beloshchin and his team went scuba diving in one of the cenotes of Yucatán Peninsula. A cenote is a natural pit, or sinkhole, resulting from the collapse of limestone bedrock that exposes crystal clear groundwater underneath. Especially associated with the Yucatán Peninsula of Mexico, cenotes were sometimes used by the ancient Maya for sacrificial offerings.
These pictures were taken at Cenote Angelita, which spirals down almost 60 meters (200 feet). Beloshchin captured this peculiar underwater world in his photos. Fresh water with near-unlimited visibility makes up the first 30 meters (100 feet); salt water fills the other half, and the two are separated by a mystical layer of hydrogen sulfide. This middle layer appears as a dense cloud from above and as a strangely colored hue from below. Bring your dive lights, as you will need them if you are going to penetrate through to the bottom.
What looks very much like a flowing river is actually the result of a natural phenomenon called a halocline. It appears when waters of different salinity form layers because of their difference in density.
It’s eerie and cool, right?
Anatoly Beloshchin is also an accomplished photographer across many other disciplines:
As I was looking back at my summer photographs, underwater GoPro shots for the most part, I began to realise how recently underwater photography became part of everyday life. Practically anyone today can Instagram a snap or two from a surfing vacation, a snorkelling trip, a wakeboarding session or simply a day chilling in the pool. But I still remember my teenage self wanting a case to take my camera diving, and it being far too expensive. That wasn’t, relatively speaking, so long ago, and Bruce Mozert, 99 this year, had already built himself a housing in his twenties.
Bruce Mozert is considered to be a pioneer of underwater photography and his images of Silver Springs, Florida, were widely broadcast during the early and mid 20th century.
Born Bruce Moser in Newark, Ohio, in 1916, he graduated high school and took a job driving a coal truck to New Jersey, but quickly decided he was “too sensitive to be a truck driver” and moved to New York City to live with his sister, model and pin-up artist Zoë Mozert. By the age of 20 he was already an accomplished photographer in NYC. He moved to Silver Springs in the fall of 1938, when Tarzan films were being made there, and soon after arriving he built his first underwater camera case.
For some 45 years (except for service with the Army Air Forces during World War II), he created scenes of people (comely young women, for the most part) doing ordinary tasks that would normally be done on land, such as talking on the phone, cooking, playing golf, reading the newspaper… underwater, all the better to show off the wondrous clarity of Silver Springs’ waters. Most of the women were actually employees of Silver Springs, and one of his most frequently shot models, Ginger Stanley, was an underwater stunt double for Creature from the Black Lagoon. He also took underwater movie stills for the many productions filmed in Silver Springs. Above the water, he took pictures of visitors going on glass-bottom boat tours, developed the film while they were on the tour, and had the photos ready to sell to visitors when they returned.
“My imagination runs away with me”
Some of his tricks: dry ice or Alka-Seltzer dropped into a champagne flute to create bubbles; canned condensed milk to simulate smoke rising from a grill. “The fat in the milk would cause it to rise, creating ‘smoke’ for a long time,” he says. With his meticulous production values and surreal vision, Mozert cast Silver Springs in a light perfectly suited to postwar America. His images anchored a national publicity campaign for the springs from the 1940s through the ’70s; competing against water-skiing shows, dancing porpoises, leaping whales and hungry alligators, Silver Springs remained one of Florida’s premier attractions, the Disney World of its day. Then, in 1971, came Disney World. The playful collection has recently come to light, as several of the photographs are currently being exhibited at the Holden Luntz Gallery in Palm Beach, Florida.
The pictures he took were so clear that MGM took them to Hollywood, where Mozert continued to develop his ideas, including the first high-speed camera case and the first underwater lighting. He then worked as an underwater motion-picture cameraman for NBC, ABC, CBS and many Hollywood productions.
Mozert now works out of his studio in Ocala, Florida, where he mainly digitizes customers’ home movies. At 91 he was still piloting his own plane and accepting the occasional commission for aerial photographs.
You can acquire his book Silver Springs — The Underwater Photography of Bruce Mozert through his website, along with separate prints. And if you’re interested in his story, he can also be reached at: email@example.com.
In digital imaging, colorimeters are tristimulus devices used for color calibration. Accurate color profiles ensure consistency throughout the imaging workflow, from acquisition to output. By using a colorimeter we can measure the amount of each of the three primary colors in the mix:
where S1, S2 and S3 can be either positive or negative.
An equal-energy white will clearly be given by three components of the same value. However, because the scale and precision of each color axis differ (due to the human color-sensitivity curves), an equal-energy white is more likely to look as follows; normalised by each color axis, this should then produce a white of equal energy.
Another common technique is to reduce the 3D color system to two dimensions by dividing each color component by the luminance, so that color can be modeled as a 2D triangle.
HSL and HSV
HSL (hue-saturation-lightness) and HSV (hue-saturation-value) are the two most common cylindrical-coordinate representations of points in an RGB color model. The two representations rearrange the geometry of RGB in an attempt to be more intuitive and perceptually relevant than the cartesian (cube) representation, by mapping the values into a cylinder loosely inspired by a traditional color wheel.
The angle around the central vertical axis corresponds to “hue” and the distance from the axis corresponds to “saturation”. These first two values give the two schemes the ‘H’ and ‘S’ in their names. The height corresponds to a third value, the system’s representation of the perceived luminance in relation to the saturation.
Perceived luminance is a notoriously difficult aspect of color to represent in a digital format, and this has given rise to two systems attempting to solve this issue: HSL (L for lightness) and HSV or HSB (V for value / B for brightness). A third model, HSI (I for intensity), common in computer vision applications, attempts to balance the advantages and disadvantages of the other two systems.
Other, more computationally intensive models, such as CIELAB or CIECAM02, are said to better achieve the goal of accurate and uniform color display, but their adoption has been slow. Their opponent-color axes are nonetheless the basis of color processing in digital photographic development: programs like Lightroom or Photoshop Camera Raw expose controls along the a (green-magenta) and b (blue-yellow) axes under the titles Temperature and Tint.
RGB to HSL
Back to HSL… to calculate an HSL value from an RGB value we need to know the maximum and minimum components of the sample.
Take, for example, Raspberry: RGB(214, 39, 134). The maximum would be R and the minimum G, with B the middle value. This color lies in the sixth section of the following diagram, where the 256 possible values have been divided into six equal sections.
So the values would be as follows:
A similar approach is used for calculating the hue. As hue is perceived as circular, it is intuitive to use degrees instead of raw values, so in this case we just divide the 360° of the spectrum into six sections.
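The conversion described above can be sketched in a few lines of code. This is a minimal illustration (not Photoshop's exact math), using the Raspberry sample RGB(214, 39, 134) from before; the sextant chosen for the hue depends on which component is the maximum.

```python
def rgb_to_hsl(r, g, b):
    """Convert 8-bit RGB to (hue in degrees, saturation, lightness)."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)   # the maximum (M) and minimum (m) components
    c = mx - mn                           # chroma
    l = (mx + mn) / 2.0                   # lightness: midpoint of M and m
    if c == 0:                            # a grey: hue is undefined, use 0 by convention
        return 0.0, 0.0, l
    # Which sextant of the hue circle the color falls in depends on
    # which component is the maximum.
    if mx == r:
        h = ((g - b) / c) % 6
    elif mx == g:
        h = (b - r) / c + 2
    else:
        h = (r - g) / c + 4
    s = c / (1 - abs(2 * l - 1))          # HSL saturation
    return h * 60, s, l

h, s, l = rgb_to_hsl(214, 39, 134)
print(round(h, 1), round(s, 3), round(l, 3))  # → 327.4 0.692 0.496
```

As expected for Raspberry, the hue lands in the sixth 60° section of the circle (300° to 360°).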
Chroma and Saturation
Because these definitions of saturation – in which very dark (in both models) or very light (in HSL) near-neutral colors, for instance, are considered fully saturated – conflict with the intuitive notion of color purity, often a conic or bi-conic solid is drawn instead, with what this article calls chroma as its radial dimension, instead of saturation.
Some useful definitions to avoid misunderstandings are:
Intensity (radiance): The total amount of light passing through a particular area.
Chroma: The colorfulness (amount of color) relative to the brightness of a similarly illuminated white.
Saturation: The colorfulness (amount of color) of a stimulus relative to its own brightness.
Hue and Chroma
Both hue and chroma are defined by projecting the RGB cube onto a hexagon in the “chromaticity plane”, with red, yellow, green, cyan, blue, and magenta at its corners. Chroma is the relative size of the hexagon passing through a point (the modulus of the point from the origin), and hue is how far around that hexagon’s edge the point lies (the angle of the vector to the point in the projection, with red at 0°).
More precisely, both hue and chroma in this model are defined with respect to the hexagonal shape of the projection. The chroma is the proportion of the distance from the origin to the edge of the hexagon. In the lower part of the diagram to the right, this is the ratio of lengths OP/OP′, or alternately the ratio of the radii of the two hexagons. This ratio is the difference between the largest and smallest values among R, G, or B in a color. To make our definitions easier to write, we’ll define these maximum and minimum component values as M and m, respectively.
To understand why chroma can be written as M − m, notice that any neutral color, with R = G = B, projects onto the origin and so has 0 chroma. Thus if we add or subtract the same amount from all three of R, G, and B, we move vertically within our tilted cube, and do not change the projection. Therefore, the two colors (R, G, B) and (R − m, G − m, B − m) project on the same point, and have the same chroma. The chroma of a color with one of its components equal to zero (m = 0) is simply the maximum of the other two components. This chroma is M in the particular case of a color with a zero component, and M − m in general.
The hue is the proportion of the distance around the edge of the hexagon that passes through the projected point, originally measured on the range [0, 1] or [0, 255] but now typically measured in degrees [0°, 360°]. For points which project onto the origin in the chromaticity plane (i.e., grays), hue is undefined.
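The invariance argued above, that (R, G, B) and (R − m, G − m, B − m) project onto the same point, is easy to check numerically. A small sketch, reusing the Raspberry sample as the test color:

```python
def hue_chroma(r, g, b):
    """Hexagon-based hue (degrees) and chroma from 8-bit RGB."""
    mx, mn = max(r, g, b), min(r, g, b)
    c = mx - mn                 # chroma is M - m
    if c == 0:
        return None, 0          # neutral color: hue undefined, zero chroma
    if mx == r:
        hp = ((g - b) / c) % 6
    elif mx == g:
        hp = (b - r) / c + 2
    else:
        hp = (r - g) / c + 4
    return hp * 60, c

r, g, b = 214, 39, 134
m = min(r, g, b)
# Subtracting m from every channel leaves hue and chroma unchanged:
assert hue_chroma(r, g, b) == hue_chroma(r - m, g - m, b - m)
print(hue_chroma(r, g, b))  # chroma is M - m = 214 - 39 = 175
```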
Sometimes for image analysis applications, this hexagon-to-circle transformation is skipped, and hue and chroma (we’ll denote these H2 and C2) are defined by the usual cartesian-to-polar coordinate transformations (right). The easiest way to derive those is via a pair of cartesian chromaticity coordinates which we’ll call α and β:
(The atan2 function, a “two-argument arctangent”, computes the angle from a cartesian coordinate pair. The first argument is the vertical or y-axis value, and the second argument is the horizontal or x-axis value. In some computer programs, like Excel, the order is reversed.)
Notice that these two definitions of hue (H and H2) nearly coincide, with a maximum difference between them for any color of about 1.12° – which occurs at twelve particular hues, for instance H = 13.38°, H2 = 12.26° – and with H = H2 for every multiple of 30°. The two definitions of chroma (C and C2) differ more substantially: they are equal at the corners of our hexagon, but at points halfway between two corners, such as H = H2 = 30°, we have C = 1, but C2 = √¾ ≈ 0.866, a difference of about 13.4%.
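A quick sketch of the polar definitions, assuming RGB components in [0, 1]. It reproduces the two checkpoints from the text: at a hexagon corner (pure red) both chromas agree, and halfway between corners (H = 30°) C2 drops to √¾.

```python
import math

def polar_hue_chroma(r, g, b):
    """H2 and C2 via the cartesian chromaticity coordinates alpha and beta."""
    alpha = r - (g + b) / 2                     # alpha = (2R - G - B) / 2
    beta = math.sqrt(3) / 2 * (g - b)
    h2 = math.degrees(math.atan2(beta, alpha)) % 360
    c2 = math.hypot(alpha, beta)
    return h2, c2

# At a hexagon corner (pure red) both chroma definitions agree:
print(polar_hue_chroma(1, 0, 0))       # → (0.0, 1.0)
# Halfway between corners (H = H2 = 30°): C = 1, but C2 = √0.75 ≈ 0.866
h2, c2 = polar_hue_chroma(1, 0.5, 0)
print(round(h2, 2), round(c2, 3))      # → 30.0 0.866
```

Note the argument order of `atan2`: the vertical (β) coordinate first, then the horizontal (α) one, exactly as the parenthetical above warns.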
In color reproduction, including computer graphics and photography, the gamut, or color gamut, is a certain complete subset of colors. The most common usage refers to the subset of colors which can be accurately represented in a given circumstance, such as within a given color space or by a certain output device.
Another sense, less frequently used but no less correct, refers to the complete set of colors found within an image at a given time. In this context, digitizing a photograph, converting a digital image to a different color space, or outputting it to a given medium with a certain output device generally alters its gamut, in the sense that some of the colors in the original are lost in the process.
If we represent a gamut as a simplistic triangle, white would be in its center. The sum of any two colors is represented by a vector addition of each of their paths from the center. Hue is given by the direction of the vector and saturation by its modulus.
The comparison makes clear that ProPhoto’s gamut is the widest, which is why it is the one used in professional photography. When rescaling to sRGB, as most screens require, many colors have to be remapped, likely changing their brightness and hue. This effect is called gamut clipping.
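A toy illustration of the clipping just described (not a real color-management pipeline, which would use a rendering intent rather than a hard clamp): a color whose components fall outside the target gamut gets its out-of-range channels clamped, which shifts its hue and saturation, not just its brightness.

```python
def clip_to_gamut(rgb):
    """Naively clamp each channel into the representable [0, 1] range."""
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

# Hypothetical wide-gamut values that fall outside sRGB after conversion:
wide = (1.2, 0.4, -0.1)
print(clip_to_gamut(wide))  # → (1.0, 0.4, 0.0)
```

The clamped result no longer has the same channel ratios as the original, which is precisely why the hue drifts.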
Last week I had my sister visit me in Berlin and I used the GoPro to record while we rode our bikes. It’s not that I intended to use that footage for anything special further on, but I’m now thinking I will keep it and try some hyperlapse on them.
Hyperlapse is the new trend extending time-lapse shooting. It is built from single still frames rather than simply speeding up a video, and the camera itself is also moving, so there is additional movement between shots. It’s like a time lapse inside a time lapse: a meta time-lapse video!
It was first used to incorporate HDR into motion pictures, by simply assembling many HDR-processed stills into a video, but its use has since evolved to achieve other amusing effects. Check out this recently published video by Rob Whitworth (who also made the famous flow-motion video of Barcelona) to see what hyperlapse looks like.
How to achieve a simple hyperlapse video
If you’ve been toying around with time-lapse photography and want a fun photography project for the weekend, this quick hyperlapse how-to from the fun folks over at DigitalRevTV has you covered.
Kai wisely teams up with expert Patrick Cheung to let you know how it is done.
Google Maps hyperlapsing
First person stabilisation hyperlapse
Here’s where my GoPro story comes in. A group at Microsoft Research in Redmond (WA) is working on an algorithm to stabilise high-speed first-person camera footage. When filmed through a camera attached to your forehead or chest, the world around you becomes so shaky that sped-up videos are almost unbearable to watch, and stabilising them is fiendishly difficult. Johannes Kopf, Michael Cohen and Richard Szeliski promise it will reach the public as an app in the near future.
The technology reconstructs a 3D version of the world described in the footage, placing each frame where it corresponds in space and tracking the camera movement. This real camera path is then simplified to a spline that takes the viewer along a new, smoother path, with the surroundings repainted using patches from the best-matching available frames. Lovely.
Here’s the video that accompanied their SIGGRAPH paper this month, to show you better what they’re doing:
And finally, as a bonus, here’s a sweet as pie, happy coloured, hyperlapse video to enhance our summer afternoons.
The first color photograph was made in 1861 by James Clerk Maxwell (the handsome dude you see to the right). Maxwell studied the human eye and found that our eyes are sensitive only to red, green, and blue light.
Before long, Maxwell had developed a method (now echoed in the Harris shutter effect) to mimic our eyesight and make color photographs by taking three black & white pictures: one with a red filter over the lens, one with a green filter, and one with a blue filter.
When he combined them together, photo magic happened and the color photograph was born!
Let’s play with this!
So now T_Paul at RetouchPRO is proposing a fun challenge: to reconstruct a color image from a film strip containing three different black & white shots, each clearly corresponding to one of the three RGB channels.
Here’s the process I followed to obtain the following result:
Aligning the three layers was a bit tricky because, having been shot in turn, the frames are not exactly the same. In particular, the guy in the middle couldn’t hold still and moved significantly. So I first cut each frame from the strip, placed the three as separate layers in a new image, and attempted an automatic alignment (in Photoshop: Edit > Auto-Align Layers).
To adjust minor alignment issues you can play with a layer’s opacity to see it against the one right below. At this point it is enough to make sure the logs, which obviously didn’t move, line up well. We’ll deal with the guys later on.
Rough color correction
I then saved each layer as a separate image, which would become my red, green and blue filters, and loaded them as the channels of a new RGB image (remember: in Photoshop, Mode > RGB Color).
Knowing which is which is mostly intuitive: the lighter an area appears through a filter, the more of that color you will get in the final mix. Faces should therefore look fairly dark through the blue filter, skies dark through the red filter, and so on. Following this logic, you can instantly see what’s going to happen here: the red filter is so light that there will be far too much red in the composite.
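Outside Photoshop, the same channel-merging idea can be sketched in plain Python: three greyscale exposures, shot through red, green and blue filters, become the R, G and B channels of one color image. The tiny 2×2 “images” below are made-up values purely for illustration.

```python
# Hypothetical greyscale exposures (0-255), one per filter:
red_filter   = [[200, 180], [ 90,  40]]   # bright where the scene is red
green_filter = [[ 60,  70], [120, 200]]
blue_filter  = [[ 30,  20], [140, 210]]

def merge_channels(r_img, g_img, b_img):
    """Stack three greyscale images into per-pixel (R, G, B) tuples."""
    return [
        [(r, g, b) for r, g, b in zip(r_row, g_row, b_row)]
        for r_row, g_row, b_row in zip(r_img, g_img, b_img)
    ]

color = merge_channels(red_filter, green_filter, blue_filter)
print(color[0][0])  # → (200, 60, 30): a reddish pixel
```

A pixel that is bright in one filter and dark in the others comes out strongly tinted toward that filter's color, which is exactly the intuition used to identify the three shots.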
To roughly compensate the filters, let’s first apply a levels correction to each of them:
Further color correction
Several tone and color adjustment layers later, the image looks like this:
For a more detailed, zone-specific color correction, you can treat each channel separately. By following simple color rules you can correct a wrong color while altering only a third of the information in the area. This is a simple RGB color wheel showing the three primary colors together with their secondary colors. If you want to remove a red blemish, you’ll need to go darker on the red, but it may also be a good idea to go lighter on the green layer, since green is magenta’s complementary color.
In this particular photograph I dealt with yellow spots caused by small aberrations, which I solved by simply painting white on the blue filter in “Lighten” blending mode.
In optics, chromatic aberration is a type of distortion in which a lens fails to focus all colors to the same convergence point. It manifests itself as “fringes” of color along boundaries that separate dark and bright parts of the image. In our example it is due to movement: the three filters don’t overlap exactly, so fringes of yellow, magenta or cyan appear here and there. On this occasion I corrected it locally by manually adjusting the alignment of only the channel that is off. See the results:
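The channel re-alignment step can be sketched as a simple translation of only the misaligned channel. This is a toy version in plain Python; the offsets here are hypothetical, and in practice you would find them by eye (or by cross-correlation) rather than hard-code them.

```python
def shift_channel(img, dy, dx, fill=0):
    """Translate a 2D greyscale channel by (dy, dx) pixels, padding with `fill`."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx           # source pixel for this destination
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = img[sy][sx]
    return out

channel = [[1, 2], [3, 4]]
# Shift one pixel to the right; vacated pixels are filled with 0:
print(shift_channel(channel, 0, 1))  # → [[0, 1], [0, 3]]
```

Re-merging the shifted channel with the untouched other two removes the local fringe without disturbing the rest of the image.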
A mixture of both techniques, color correction by channel and chromatic aberration adjustment, were used in this guy’s face:
As for image restoration, I had to get rid of all the artifacts in the image. Most of them were due to weathering of one filter or another, so instead of flattening and working on a composite image, I preferred to heal each filter separately.
As seen above:
CYAN imperfections are corrected in the RED filter
YELLOW imperfections are corrected in the BLUE filter, and
MAGENTA imperfections are corrected in the GREEN filter
And finally, general retouching. By this I mean flattening the image and applying levels, toning and, as I did in this case, the frequency separation technique to increase definition on the main subject and subdue distracting details.
Some time ago I ordered an album for my latest trip at Snappybook. I had a few issues with classic Hofmann’s software installation and decided to look for more options. I discovered that Snappybook was highly recommended in many, many forums and went for it.
I should point out that the attention received was great. I had mistakenly repeated two spreads and they gave me a call to check on that before printing. Kudos to them!
This was the weakest link, in my opinion. It runs natively on Mac OS X but it is too simple for my liking. I ended up designing the layout and creating the spreads in InDesign, then importing full-spread images into the Snappybook software instead of the individual pictures. No problems from then on. If you work at the correct size and resolution there should be no issues with this method, and it lets you play more freely with typography and guides.
Tony Duran is an American photographer known for his celebrity portraits and his work with male models.
Duran is the man everybody turns to for their riskier photoshoots. His scandalous style in black and white, leaving very little to the imagination, has made him the preferred choice of A-listers when they want to be provocative. From Beyoncé, Jennifer Lopez, Natalie Portman and Scarlett Johansson to Rene Russo, Brooke Shields and Pamela Anderson, Duran is one of the most widely published celebrity photographers.
There’s always a minute or two to admire Nick Brandt’s photography. His African animal portraits are regal and epic, showcasing the animals’ mysterious ways and ferocious manners, yet also a look of ancient, candid wisdom.
It’s interesting to see how a successful music video maker ended up forging a career out of his photography solely in Africa. Brandt studied Painting, and then Film at St. Martins School of Art and moved to the United States in 1992, where he directed many award-winning music videos for the likes of Michael Jackson (Earth Song, Stranger in Moscow and Cry), Moby, Jewel…
It was while directing “Earth Song” in 1995, in Tanzania, that Brandt fell in love with the animals and land of East Africa. Over the next few years, frustrated that he could not capture on film his feelings about and love for animals, he realized there was a way to achieve this through photography, in a way that he felt no-one had really done before.
Shot on medium-format B&W film, without telephoto or zoom lenses, his photography bears little relation to the colour documentary-style wildlife photography that is the norm. The animals in his portraits seem human-like; he manages to capture them as if they themselves had asked for the photo to be taken, rather than stealing snaps from behind the bushes.
Big Life Foundation
In September 2010, in urgent response to the escalation of poaching in Africa due to increased demand from the Far East, Nick Brandt founded the non-profit organization called Big Life Foundation, dedicated to the conservation of Africa’s wildlife and ecosystems.
Music video direction, photographs that will take your breath away and a non-profit organization protecting animals… this sounds like a win-win life adventure. …
Originally from Tegucigalpa, Honduras, he’s an LA-based photographer whose gorgeous shots can easily range from sweet and demure to downright sexy. You can clearly see the influence of the all-time great photographers of fashion and women, like Helmut Newton or David Bellemare, along with an echo of Terry Richardson’s work.
He started his photography career pretty much by accident about three years ago, after receiving a digital camera for Christmas; his girlfriend at the time served as the model. His work is natural, soft, voyeuristic and erotic, with a hipster sensibility about women and their styling, hair and make-up…
No matter how much shit is thrown at you, know that right around the next corner something epic is going to happen.
And I really felt the need for a quote like this today. Let’s hope it’s true, though.
Jordan Matter, who was living the dream as a successful actor, wasn’t interested in carrying the lens around his neck until an exhibit by another world-famous photographer awakened in him something he didn’t know existed. Kismet! What started as a passion grew into a bustling career, and in 2009 Matter began work on his Dancers Among Us project, in which he photographs “professional dancers in everyday situations throughout America”. Watch for the book from Workman Publishing, due out this fall (October 21).
Tim Tadder is not only a sports photographer, although he started out shooting sports events and his sports work is gorgeous. Tadder’s images feature strong foreground subjects as the principal graphic element, usually poised at that peak moment of high-wire, fuel-injected tension. His style is fantastic for advertising, which he has been doing since 2005. He and his team (two assistants, a producer and a digital tech) bike every morning to their studio in Cardiff, San Diego, right beside the sea.
“Classic with a great ocean view and fresh breeze all day.”
I absolutely love storms, unlike my grandmother. She’s lucky she lives in Valencia. People who know me know that I wouldn’t mind getting wet if I could take a halfway decent photograph of lightning. …
This is the title of the new book published by Joachim Baldauf’s fashion photography group at HTW-Berlin. It is a survey of the work done during the semester, which culminated in the exhibition “Semipermeable” at the Seven Star Gallery. The book follows the lessons famous photographer Baldauf gave his pupils and showcases the resulting images.
The book was presented at the Werkschau (the school’s end-of-term show) at HTW on July 13 and 14, and copies can be acquired via the project’s website: www.semipermeabel.com.
* Rights of the photographs in the magazine belong to their authors