Rambus unveils ‘Binary Pixel’ sensor tech for expanded dynamic range

US technology company Rambus has unveiled ‘Binary Pixel’ sensor technology, promising greatly expanded dynamic range for the small sensors used in devices such as smartphones. Current image sensors are unable to record light above a specific saturation point, which results in clipped highlights. Binary Pixel technology gets around this by recording when a pixel has received a certain amount of light, then resetting it and in effect restarting the exposure. The result is significantly expanded dynamic range from a single-shot exposure. The company has demonstrated the technology using a low-resolution (128 x 128 pixel) sensor, and says it can easily be incorporated into CMOS sensors using current manufacturing methods.

Aside from the ‘temporal oversampling’ described above, Binary Pixel technology employs a couple of further innovations. It uses Binary Operation, sensing photons using discrete thresholds, an approach the company says is similar to the human eye's and gives better sensitivity across the full range from dark to bright. It also employs Spatial Oversampling, meaning the individual pixels are sub-divided to capture more data and improve dynamic range. The technology isn’t restricted to phone sensors, and in principle should work equally well for all sensor sizes.
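The reset-and-count mechanism is easy to see in a toy simulation. The sketch below illustrates the principle only, not Rambus's actual design; the full-well size, slice count and photon numbers are made-up assumptions.

    # Toy model of 'temporal oversampling': a pixel that resets when full,
    # with the reset count used to reconstruct the true signal afterwards.
    # All numbers are illustrative assumptions, not Rambus specifications.
    FULL_WELL = 1000    # electrons the photosite holds before clipping (assumed)
    TIME_SLICES = 8     # how often the pixel is checked during the exposure

    def expose(photons_per_slice):
        """Simulate one pixel over a single exposure, resetting whenever it fills."""
        resets, charge = 0, 0.0
        for _ in range(TIME_SLICES):
            charge += photons_per_slice
            if charge >= FULL_WELL:    # saturated: note it and restart the exposure
                resets += 1
                charge -= FULL_WELL
        # Reconstructed value: completed full wells plus the residual charge.
        return resets * FULL_WELL + charge

    # A conventional pixel would clip this highlight at 1000;
    # the reset-counting pixel reports 2400.
    print(expose(photons_per_slice=300))

The same per-pixel bookkeeping is what allows dynamic range to grow within a single exposure, rather than by bracketing several.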

Rambus lists the key advantages of Binary Pixel sensors as follows:

Ultra-High Dynamic Range
• Optimized at the pixel level for DSLR-quality dynamic range in mobile and consumer cameras 

Single-Shot HDR Photos & Videos
• Operates in a single exposure period to capture HDR images in real time with no post-processing

Improved Low-Light Sensitivity
• Spatial and temporal oversampling reduces noise and graininess

Works with Current Mobile Platforms
• Designed to integrate with current SoCs, be manufactured using current CMOS technology, and fit in a comparable form-factor, cost and power envelope

Press release:

Rambus Unveils Binary Pixel Technology For Dramatically Improved Image Quality in Mobile Devices

 Image comparison illustrating the theoretical benefits of the Binary Pixel Imager 

Breakthrough Technology Provides Single-Shot High Dynamic Range and Improved Low-Light Sensitivity in a Single Exposure

SUNNYVALE, CALIFORNIA AND BARCELONA, SPAIN – February 25, 2013 – Rambus Inc. (NASDAQ: RMBS), the innovative technology solutions company that brings invention to market, today unveiled breakthrough binary pixel technology that dramatically improves the quality of photos taken from mobile devices. The Rambus Binary Pixel technology includes image sensor and image processing architectures with single-shot high dynamic range (HDR) and improved low-light sensitivity for better videos and photos in any lighting condition.

“Today’s compact mainstream sensors are only able to capture a fraction of what the human eye can see,” said Dr. Martin Scott, chief technology officer at Rambus. “Our breakthrough binary pixel technology enables a tremendous performance improvement for compact imagers capable of ultra high-quality photos and videos from mobile devices.”

As improvements are made in resolution and responsiveness, more and more consumers are using the camera functionality on their smart phone as the primary method for taking photos and capturing memories. However, high contrast scenes typical in daily life, such as bright landscapes, sunset portraits, and scenes with both sunlight and shadow, are difficult to capture with today’s compact mobile sensors – the range of bright and dark details in these scenes simply exceeds the limited dynamic range of mainstream CMOS imagers.

This binary pixel technology is optimized at the pixel level to sense light similar to the human eye while maintaining comparable form factor, cost and power of today’s mobile and consumer imagers. The results are professional-quality images and videos from mobile devices that capture the full gamut of details in dark and bright intensities.

Benefits of binary pixel technology:

  • Improved image quality optimized at the pixel level
  • Single-shot HDR photo and video capture operates at high-speed frame-rates
  • Improved signal-to-noise performance in low-light conditions
  • Silicon-proven technology for mobile form factors
  • Easily integratable into existing SoC architectures
  • Compatible with current CMOS image sensor process technology

The Rambus binary pixel has been demonstrated in a proof-of-concept test-chip and the technology is currently available for integration into future mobile and consumer image sensors. For additional information visit www.rambus.com/binarypixel







Comments


forpetessake

Going to the logical end, a single pixel can work in binary mode (detecting light or darkness), similar to the dithered images of printers. If they can create pixels with a 50nm pitch, it may be possible to get image quality close to today’s sensors.

But pretty soon they will hit the law of diminishing returns: the quantum noise is determined by the total light collected by the surface of the sensor. For example, even an ideal (noiseless) micro 4/3 sensor will not be able to match the performance of today’s (non-ideal) FF sensor.


Eric Fossum

why 50 nm? What are your assumptions?
You are also forgetting about improvements in QE.


plasnu

I prefer the left picture (conventional) to the right picture (HDR).


WilliamJ

You’re right that photography is about playing with light, showing and hiding, stimulating the imagination by playing with the shadows. Showing everything is to photography what an instruction booklet is to literature.

Ideally, a nice camera should let you choose the degree of dynamic range, as Fujifilm does with its EXR. With an EXR camera you can choose between 100%, 200% and 400%, which proves, by the way, that the Fujifilm engineers are not just “camera for fun” designers, but really understand what photography is about.


sportyaccordy

I feel like a lower pixel density could yield a much wider dynamic range. But if this tech lets us have the best of both worlds, I can’t complain.


Eric Fossum

I should mention here that I have been working with Rambus for a few years and they fund our Quanta Image Sensor R&D at Dartmouth, along with other projects elsewhere. I believe their strategy is to invest in R&D that yields fundamental IP in advanced areas. It is an interesting business model and one that actually supports innovation and technology development even if they are ultimately a non-practicing entity.


Nigel_L

Reminds me somewhat of a forum post I made a few years ago…

See http://forums.dpreview.com/forums/post/18895569

Regards, Nigel


Eric Fossum

indeed Nigel. Ahead of your time, but I was ahead of you by “a bit” as were a few others.


Nigel_L

Hi Eric, you are right – I also see some related comments from Roland Karlsson. Hopefully these ideas will translate into real products sometime soon.

Regards, Nigel


Eric Fossum

I made some comments on this in the news forum yesterday including a reference to technical work they published. Bottom line, I am a big fan of binary pixels and oversampling (spatially and temporally) and believe this is where things are headed even for large sensor cameras.

I see there is some wrong information being circulated in this comments section so readers, beware!


Steve D Yue

at the moment a single reset will introduce delays

if one needed more than one reset, that would make it ineffective

it’s better to have pixels capable of pixel-level control (Canon has already patented this) to handle different ISO ‘choices’ pre-selected by the shooter, which allows for multi-ISO selection according to multiple levels of light (a zone system of ISOs!!!) where no ‘resetting delays’ are introduced or involved.

one then could have ‘final image simulation’ exposure chosen based on more than one ISO setting, each aimed at different light levels seen by the sensor (Natural DR ExpSim LV)

if not multi-ISO capability, at least dual-small-large-type-pair pixel sensors can be dedicated to handling both ends of the light extremes (bright vs dark) instantly with no resets and no delays

fujifilm already has dual-type pixels, but they only operate at a single ISO level chosen by the shooter (and fujifilm doesn’t have exp sim lv at all anyway, making multi-ISOs impractical)

sdyue


Steve D Yue

this is exactly what i’ve been talking about…!!!

dual-type-pixels (binning), but using a smaller pixel for brighter light and larger pixel for lower light

meaning a 23Mp image is made from a 46Mp sensor with ‘dual-(small-big)-pixel-pairs’

sdyue


JRFlorendo

If that’s the case, Rambus just infringed on the Fuji S5 Pro’s Super CCD technology. I’m not sure though; it seems like the same scheme.


Peter G

No, that isn’t what this is. You are describing Fuji Dual sensor strategy they have been using for years.

This is not two sensors. It is one sensor, and a simple digital bit of storage to tell when the sensor “rolled over”.


Peiasdf

Sadly the company is RAMBUS, so it means this technology will be very expensive and likely replaced by something better and cheaper 3 months after release, but RAMBUS will spend the next 5 years suing everyone to prevent the adoption of the cheaper technology.


D1N0

Great, more bland pics.


Funduro

This new technology can/might be a game changer. Its “HDR” abilities will create some great-looking images in a PnS or smartphone.


Charles C Lloyd

Good to see some actual innovation in the sensor arena. Far too much of digital photography hitherto has been about replacing film, but the opportunities in digital extend well beyond that. This is a great idea, and I see no reason why they couldn’t reset the pixels more than once; just keep track of the number of resets and add enough bits to count them. Two bits gives you four resets; that should do it.


bobbarber

Would more than one reset be necessary? I’m trying to think of a situation.


peevee1

1 bit to count resets = 1 EV expansion of DR. 4 bits (16 resets) = 4EV. Then current 10EV P&S sensors will match FF.
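For anyone checking peevee1's figures, here is a rough sketch of the arithmetic; it assumes dynamic range is capped at the top by full-well capacity, which is an idealization.

    # Sketch of the reset-bits -> EV arithmetic: n counter bits allow up to
    # 2**n full wells per pixel, i.e. up to n extra stops of highlight headroom.
    import math

    base_dr_ev = 10                          # hypothetical P&S dynamic range
    for bits in range(5):
        capacity_gain = 2 ** bits            # effective full-well multiplier
        print(bits, 'bits ->', base_dr_ev + math.log2(capacity_gain), 'EV')
    # 0 bits -> 10.0 EV ... 4 bits -> 14.0 EV, roughly FF territory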


LightBug

Watch out camera industry, lawsuits are coming!


Steve D Yue

actually the idea of pixels handling more than one level of light (or more than one sensitivity) has been around a while, and has certainly already been patented by camera manufacturers

even pixel level control of exposures via multi-ISO capability, too (several years ago)

so, their patent may simply be a variational workaround to avoid other patent conflicts. if not, they’re going to get lawsuits too…

no one is going to use an idea if it isn’t ideal for mfrs

sdyue


JordanAT

I’m not sure this is so fabulous for most “small sensor” cameras, except that the bulk of the sensors produced are of that type. I generally don’t have a driving need for expanding the bright side of the dynamic range in 1/2.3 (or smaller) cameras – what I really need is more sensitivity in dark scenes.

While you could claim that this allows getting more light to the sensor without blowing highlights…yes – sort of. Most exposures are limited by absolute duration (shake/subject movement) and not the fear of clipping on highlights.


lost_in_utah

More like US Patent Troll.


Steve D Yue

agree

but that’s the nature of patents… that is, every variation is ‘legit’…

sdyue


forpetessake

“but that’s the nature of patents…”
Not at all; the idea of patents was to prevent unfair competition through the stealing of somebody’s ideas. Patent trolls are basically in the business of planting minefields, hoping somebody steps on them. The former is constructive, the latter destructive.


AbrasiveReducer

If this can hold the highlights without flattening everything (HDR Smokevision effect) it would be great and a shame to waste on cellphones.


plasnu

Is this something like floating point?


forpetessake

This is essentially the same idea as multiple exposures using an electronic shutter. For example, if you take 4 normal exposures and merge them into a single image, you get 2 times better SNR (and dynamic range), effectively pushing ISO 4 times lower. You can do it today with cameras like the Sony NEX, except the shutter is not electronic, it’s mechanical, so there is a problem with moving subjects.
On the subject of dynamic range: displays and prints have a much more limited dynamic range than modern sensors. In order to display higher dynamic range you need to compress it, and the more you compress, the less natural the image looks. Until displays with much better dynamic range are built, increasing the dynamic range of the sensor has little advantage.
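The "four exposures, twice the SNR" figure follows from shot-noise statistics (noise grows as the square root of the signal). A quick simulation makes the point; the numbers below are arbitrary assumptions.

    # Check the '4 exposures -> 2x SNR' claim with simulated Poisson shot noise.
    # Illustrative only; assumes photon shot noise is the dominant noise source.
    import numpy as np

    rng = np.random.default_rng(0)
    signal = 100                                    # mean photons per exposure
    frames = rng.poisson(signal, size=(4, 100_000)) # 4 exposures, many pixels

    single_snr = signal / frames[0].std()           # ~10
    merged = frames.sum(axis=0)                     # merge the four exposures
    merged_snr = (4 * signal) / merged.std()        # ~20
    print(merged_snr / single_snr)                  # ~2.0, i.e. sqrt(4)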


Peter G

No it isn’t like that at all.

It will just take one normal-length exposure. Only the pixels that would formerly have been blown out will capture additional info, but still during the regular exposure time.


Steve D Yue

taking multiple exposures defeats the whole point

key is doing it in a single take

but for me, sensors capable of pre-setting multiple ISOs for different pixels according to light levels make more sense

sdyue


forpetessake

‘It will just take one normal length exposure.’ — it is and it isn’t. It’s done by reducing the ISO of the sensor (compared to a traditional implementation); after that it’s normal, but exactly the same thing happens with multiple exposures, except that all pixels are reset, not just those that would otherwise be saturated.

‘key is doing it in a single take’ — it’s a de facto electronic shutter. What is one take? And who cares?


Peter G

Essentially a counter and reset for the pixel bucket.

Since they call it binary, I will assume that for now, the counter is essentially just one bit.

You can probably do this with just a few transistors that trip automatically when the pixel bucket hits full. It sets one bit, cleans the bucket, and starts collecting again.

One of those obvious-in-hindsight ideas that should really work out well.

I am just sad that patent troll Rambus thought of it first.


Steve D Yue

actually, they’ve only thought of a variation of it, not exactly the first.

there were other ways to do this before they even thought of it ‘first’.

and nothing wrong with ‘patent variation’ as this is the way patents work

sdyue


Roland Karlsson

I see no technical explanation of how it works. Anyone know? Or have a pointer?


Karroly

I may be wrong, but I think the idea behind that can be explained as follows :
A photosite can be seen as a bucket that is being filled up with electrons when exposed to light. Overexposure occurs when the bucket overflows.
But filling the bucket is not instantaneous. It looks to me like Rambus is bringing in a new technology that makes it possible to monitor the bucket level. Then it is possible to empty (“reset”) the bucket (and memorize that it was filled up once, and maybe more than once) and restart filling it until the shutter closes.
The final electrical level corresponding to the total amount of light received by the photosite is then the sum of as many full buckets as necessary plus the last, partially filled bucket.
Highly sensitive photosites are quickly saturated. But with this new technology, saturation is no longer a problem.
So the advantages are both in lowlight capability and dynamic range.
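Putting hypothetical numbers on Karroly's description makes the reconstruction concrete; the bucket capacity and counts below are invented for illustration.

    # Karroly's bucket arithmetic with assumed values.
    bucket_capacity = 1000   # electrons per full bucket (assumed)
    times_filled = 2         # bucket overflowed and was reset twice
    residual = 350           # charge remaining when the shutter closed

    total_signal = times_filled * bucket_capacity + residual
    print(total_signal)      # 2350, well past the 1000 e- clipping point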


Clear as Crystal

Yes, that’s a good analogy for how a pixel works. One thing that has just struck me, though, is that twice the number of possible charge levels would mean twice the information that needs to be stored, meaning twice the file size.


Peter G

Twice the charge levels is actually only one more bit of storage per pixel, and since file sizes are often already deeper than the actual dynamic range, no real file format change is needed.

But files will likely be a little less compressible because they will contain a bit more data.
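Peter G's bit-accounting, spelled out with an assumed 10-bit readout:

    # Doubling the number of charge levels costs exactly one extra bit per pixel.
    import math
    levels = 1024                    # a hypothetical 10-bit sensor readout
    print(math.log2(levels))         # 10.0 bits
    print(math.log2(2 * levels))     # 11.0 bits: one more bit, not twice the data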


Clear as Crystal

Well spotted, I stand corrected.


Clear as Crystal

Sounds like a great idea. The only problem I can see is if the time to reset the pixel is significant compared to the exposure time. In that case the pixel wouldn’t gain any extra charge during the reset, and this would leave a plateau in the signal before it increases again, giving a lower value than it really should be.
Nothing says it needs to stop at one reset either. If this works consistently it could be a really impressive next step for sensors.


Peter G

Reset time is an issue, but I suspect it isn’t significant. You could also apply a small correction factor to help with that anyway.

You could do it more than once, but it increases the circuit complexity per pixel for what are likely quickly diminishing returns, except in extreme HDR photography.


Clear as Crystal

The way I see it, resetting the pixel would work well as long as it has the chance to reset and start gathering more electrons. As you say, in that case you just add a correction factor; the problem would be if the reset time was significant and the pixel was reset but hadn’t started the new collection yet. In that case you can’t add a correction, since you don’t know how much to add.

Perhaps an alternative is to record the time needed to reach full, then use that to give an estimated value for the full exposure time. Rambus, if you’re listening, feel free to make me an offer for using that idea :)
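Clear as Crystal's extrapolation idea can be sketched in a few lines; all values below are hypothetical, and it assumes the light level stays roughly constant over the exposure.

    # Estimate what a saturated pixel would have collected, from its
    # time-to-saturation. Hypothetical numbers throughout.
    full_well = 1000          # electrons (assumed)
    exposure_time = 1 / 60    # seconds, the full exposure
    time_to_full = 1 / 240    # pixel saturated a quarter of the way through

    estimated = full_well * (exposure_time / time_to_full)
    print(estimated)          # 4000.0, i.e. two stops beyond the clipping point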


Steve D Yue

actually their patent relies on a single type pixel rather than dedicated two-type pixel pairs

any resetting is inefficient compared to ‘single-setting-first-time’ depending on light levels of dedicated dual-pair type pixels, which means a new dual-(small-large)-pair type pixel sensor is required instead (which is no more difficult to manufacture than a single one that needs to ‘reset’)

in a dual-pair type pixels i’m thinking of:
when exposure starts, brightly lit areas are instantly handled by smaller pixels and poorly low lit areas are instantly handled by larger pixels, so there is no need for ‘resetting’ at all in time, as both happen at the beginning moment of exposure without delay

if anyone is thinking of this, Canon is most likely already doing this, but deciding whether to fully release it or not (they’ve very likely been testing it for a while, as may others, like Fujifilm)

sdyue


samhain

…Are they publicly traded?


Mescalamba

It’s quite interesting, but given Rambus’s history of “patents” (they are great patent trolls) I wouldn’t put much faith in it.

Of course, in theory it should work…


shaocaholica

Resetting a photosite mid-exposure is still 2 exposures.


Rachotilko

But not all of them at the same time! That’s the trick…

– in a conventional sensor, there is one readout time

– in the Fuji EXR, there are two different readouts for two groups of the sensor’s pixels

– in this Rambus tech, each pixel is reset when it needs it. Much more efficient use of the sensor area than the Fuji approach


Steve D Yue

agree, and the hardware to do the resetting (which introduces delay) is just as complicated as having two different pixels (fujifilm)

the hardware for ‘one resettable pixel’ is like having two different pixels anyway, but introduces resetting-delay inefficiencies

sdyue


Tazz93

So it sounds like they finally found a way to do ‘native’ HDR blending/variable pixel exposure on the sensor. I’ve often wondered what kinds of challenges there were to that; hopefully someone will explain.


OniMirage

Wow this is going to be fantastic and I hope all sensors go this route.


Steen Bay

Increasing the saturation capacity by resetting the pixels is in practice the same as lowering the sensor’s base ISO. A 1/2.3″ sensor could have the same IQ at ISO 3 as a FF camera has at ISO 100, but the downside is that shooting at ISO 3 will most often require a rather long/slow shutter speed, so it’ll only work with static scenes/subjects.
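Steen Bay's ISO 3 figure follows from sensor-area equivalence. A back-of-envelope sketch, using approximate sensor diagonals:

    # Equivalence arithmetic behind 'ISO 3 on 1/2.3-inch = ISO 100 on FF'.
    # Diagonal figures are approximate assumptions, not specifications.
    ff_diag = 43.3            # full-frame diagonal in mm
    small_diag = 7.7          # typical 1/2.3-inch sensor diagonal in mm

    crop = ff_diag / small_diag     # ~5.6x crop factor
    area_ratio = crop ** 2          # ~31x less light-gathering area
    print(100 / area_ratio)         # ~3.2: roughly ISO 3 equivalent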


Greg Lovern

How do you figure that resetting a pixel during exposure, to allow collecting more photons without blowing out to white, is the same as lowering the base ISO and so would require much longer shutter speeds?

Compare two otherwise identical cameras taking the same shot with the same settings; one camera has this technology and the other does not.

In the camera without this technology, some highlights are blown to white. In the camera with this technology, with the same shutter speed and other settings, more photons are collected in the highlights, each photosite collecting a slightly different number of photons than the next, so the highlights are not blown to white.

Where is the longer shutter speed in that scenario?


Steen Bay

If the highlights are blown, then the shutter speed was too long/slow or the ISO too high. The solution is to use a faster shutter speed or a lower ISO.


bobbarber

I’m not sure you’re right about that Steen Bay. Lowering ISO or increasing shutter speed doesn’t increase dynamic range. You save your highlights, but the shadows become black or noisy, especially with a small sensor. The difference with this technique, I’m assuming, is that you give sufficient exposure to the low end.

I’ve been wondering for a long time why somebody hasn’t just done this. I don’t understand the physics of sensors, but it seems obvious that you should know when a given pixel blows out to white and be able to do something with that information.


tompabes2

Great idea! HDR with one shot! The naysayers have already gathered and are posting at full speed… 😉


Nikolaï

typo:

“Low-Light Sensitivity in a Single Explosure”… this thing will be da bomb when it comes out… sorry couldn’t help myself.


joyclick

Let us give it to them. OK folks, let us have some small-sensor cameras in our hands and ‘see it’ for ourselves. Whenever that may be; the sooner the better, Rambus.


Jono2012

Another great idea in theory…..


Rachotilko

I won’t criticize it since I don’t understand the details. However, I do have some superficial hypothetical opinions regarding the idea.

1. From the short description, it seems as if it actually chooses to overexpose and takes care of the overexposed pixels via the resetting mechanism.

2. It sounds like a smarter version of the Fujifilm EXR mechanism. As with the EXR sensor, one group of pixels is used for capturing the shadows and the other group for capturing the highlights. But in the case of the EXR, a pixel’s membership in either group is predetermined (the well known EXR pixel layout), while in the case of the Rambus Binary Pixel technology the membership is decided by the actual exposure process taking place.

3. Practice has shown that the EXR approach works in expanding the DR, but some sensor area is actually wasted. The Rambus technology essentially means you’ll get the benefits of EXR without the infamous EXR drop in resolution.


NinjaSocks

Looks promising even if the name of the company that created it sounds like a porn site.


Osiris30

Rambus is a company that started out designing memory chips; they were adopted by Intel for the very first Pentium 4s. The chips were expensive, hard to make, and had tech licensing costs that were way too high. The rest of the industry shunned Rambus memory and the company nearly went broke…


OniMirage

More specifically, they were makers of extremely high-bandwidth memory that, yes, was expensive. It was used in high-end systems that needed bandwidth for cached processes rather than low-latency RAM. The PS2 and PS3 used Rambus technology. GDDR is based on the same high-bandwidth-memory approach and is used in high-end graphics cards.


mr_ewok

ehm, 180px width preview images? srsly?


tkbslc

Proof of concept. Sensor fab is expensive.


AngryCorgi3G

Me like. But is this the same Rambus that came up with RDRAM?? These guys are known patent trolls. It’s possible that nothing substantial may come of this.


KrisPix

Unfortunately this is the same Rambus … one of the few companies I would never work for.


Osiris30

Ding ding ding, we have a winner. Even if they do try and license it, the cost will be huge, just like the RDRAM debacle.


tkbslc

That RAM thing was a decade ago, guys.


AngryCorgi3G

It was long ago, but Rambus garnered the “Patent Troll” reputation since.

http://en.wikipedia.org/wiki/Rambus


Kim Letkeman

A brilliant idea … ultimately far more promising than, say, Fuji’s EXR technology because it enables much better exposures in hyper contrasty situations (expose for the shadows could become the norm.) Further, it does not use kinky-weird demosaicing algorithms and thus promises much cleaner images than specialized filter patterns etc. A great first step.


s_grins

This is a very promising first step that leads to new frontiers.
I have time to wait for further developments.


AV Janus

That picture looks familiar…
is that just a simulation or did they actually take that shot?


neo_nights

Well, it says “Image comparison illustrating the theoretical benefits of the Binary Pixel Imager “. So I think it’s just a simulation.


mgrum

It’s just a rough mock-up for people who don’t know what dynamic range is. A point missed by at least a third of the comments here.


ageha

Of course it wasn’t taken by the sensor, they even said that. How can a 128×128 pixel sensor take this shot?


GURL

Besides cost and megapixel availability, the main point is whether the usual low-dynamic-range images people are used to getting from a phone can be improved on. If the answer is yes, this should help solve the “flash is not powerful enough” problem.


HowaboutRAW

And since this sensor is not actually in any cell phone cameras, the cell phone/smart phone camera makers could improve the images from existing gear by allowing the capture of raw data. (No new unperfected sensor needed, just a software update for the phone.)


hc44

Hey critics, they’ve said their prototype sensor is 128 x 128. The sample above is bigger than that and is described as a theoretical comparison.

So that ain’t even it!


Nikonworks

Of course the colors are muted; notice the deep shadows they are in.

Every day I deal with people sticking their cell phones in, trying to get my shot.

This technology will make matters worse for me, but it should enable those cell phone users to get much better shots than they are getting now.

For me Light is what keeps me ahead of the cell phone users.
Now they will edge closer in results.

Instagram and other sites better gear up for a large increase in uploads.

More exchanged photos can only help this world of ours.


expressivecanvas

This looks horrendous… of course, the “Current” imager example looks horrendous too, but at the opposite extreme. This is worthy of publishing on DPReview? Gimme a break!


Devendra

Lots of things look horrendous when starting out.


Aibenq

Maybe the ‘current mobile imager’ sample is actually the REAL result from the imager. Remember, after the sensor captures the image, the phone processes the RAW file before showing the final result, which we see as a JPEG file.

So that CURRENT MOBILE IMAGER sample may be an un-processed raw image.


HowaboutRAW

Aibenq–

Um, well current cell phone cameras toss out a lot of raw data, if that’s what you mean by “process”.

I’d prefer to process my own data. (True for any digital camera, yes including the Fuji XTrans sensored cameras.)

Since this is still a proof-of-concept sensor, we don’t have access to the raw data, so we can’t really draw conclusions about what’s in the raw files from this demonstration unit.


mgrum

If you read the article you’ll have noticed that the images posted are an “illustration”, not the actual result (the prototype is only 128x128 pixels).


Jan Privat

We have now come to the point where cellphone camera innovations push digital photography. LOL. But okay, lezz go!


neo_nights

It’s simply because smartphone cameras are more popular than ‘regular’ P&S cameras. After all, most new tech is tested first on small-sensored cameras and then moves to a more advanced level.


ageha

That happened a long time ago and makes total sense.


OldArrow

If it gets to be user-adjustable and visible while setting up the shot (i.e., not effective only during exposure), from 0 to the level shown in the samples above, I think it may have significant potential. As presented, it would require the same amount of PP as all other high-contrast images…


anthonyGR

Guys, stop trying to imagine what this will do for your DSLR photography. This is aimed at cellphone cameras. It’s for teenagers taking shots of their buddies LOLing and wanting to capture some of the background too. The colors being muted or badly tonemapped is irrelevant here.


abrunete

It doesn’t really look HDR-ish to me; you do get the shadows, but it looks kinda washed out in the example shown here.
But in principle, it’s promising…


madsector

And what exactly is the difference from Fuji’s EXR technology?


AEndrs

Have you actually read the article? (And the source?)


mgrum

EXR won’t introduce weird motion artifacts, but is limited in how far the DR can be extended. This approach can potentially yield unlimited DR if the pixels can be reset many times.


dimsgr

EXR is about different pixels (e.g. half of the sensor) having different exposure times, e.g. half of them having 15 sec and the other half 1/60 sec, which results in capturing bright as well as dark objects but obviously introduces syncing problems, especially with relatively fast-moving objects.
greets


Kuv

I don’t see this working when the exposure needs to build up over time and motion may be present (i.e. long-exposure landscape shots at a bay).


steve_hoge

Remember, all pixels will be gathering light during the same exposure period. You won’t have some shutting off before or after others, so there shouldn’t be any temporal effects.

Source Article from http://www.dpreview.com/news/2013/02/27/rambus-shows-binaryt-pixel-sensor-technology-for-expanded-dynamic-range
