The iQOO 11, which was recently released in India, can now be purchased through iQOO’s official Indian website and Amazon.in. It is available in Alpha and Legend colors and has two memory options in India — 8GB/256GB and 16GB/256GB.
The prices for these options are INR59,999 ($740/€680) and INR64,999 ($800/€735) respectively. However, there are exchange bonuses and bank offers that can bring the price down even further.
The iQOO 11 is powered by the Snapdragon 8 Gen 2 SoC, making it the first smartphone in India to have this processor. It runs on the Android 13-based Funtouch OS 13 and has a 6.78″ 144Hz 1440p E6 AMOLED screen. The display has a fingerprint reader underneath and a punch hole in the center for the 16MP front-facing camera.
Brief Review
The iQOO 11 is the latest smartphone from iQOO and the first in India to be equipped with Qualcomm’s Snapdragon 8 Gen 2, the latest and most advanced chip at the time of its release. The device also brings upgrades in design, display, performance, and battery life compared to its predecessor, the iQOO 9T.
The iQOO 11 comes in two color options, the black “Alpha” and the white “Legend”, and features a faux leather material on the back which is said to feel premium and provide enough grip.
iQOO 11 smartphone on sale
The device also has a 6.78-inch E6 AMOLED display with a resolution of 1440p and a refresh rate of 144Hz. In terms of performance, the iQOO 11 is powered by the Snapdragon 8 Gen 2 and offers fast and reliable biometrics through its optical fingerprint scanner.
The battery life is also said to be good with a 5,000 mAh battery and support for 120W wired charging. Overall, the iQOO 11 is a well-rounded device with many improvements over its predecessor, making it a solid choice for consumers.
On the back of the device, there is a triple-camera setup consisting of a 50MP primary, 13MP telephoto, and 8MP ultrawide units. Additionally, it has a V2 chip for improved gaming and photography performance.
Should you buy?
If you’re considering purchasing the iQOO 11, it’s important to keep in mind that it is a typical iQOO phone, with both strengths and weaknesses. The device excels in performance, providing a top-notch flagship experience.
However, it’s worth noting that when the OnePlus 11 and Samsung Galaxy S23 series are released, they may offer comparable or superior performance. Whether the iQOO 11 can compete with them in terms of value will depend on the pricing of those devices.
Scientists estimate that more than 95 percent of Earth’s oceans have never been observed, which means we have seen less of our planet’s ocean than we have the far side of the moon or the surface of Mars.
The high cost of powering an underwater camera for a long time, by tethering it to a research vessel or sending a ship to recharge its batteries, is a steep challenge preventing widespread undersea exploration.
MIT researchers have taken a major step to overcome this problem by developing a battery-free, wireless underwater camera that is about 100,000 times more energy-efficient than other undersea cameras. The device takes color photos, even in dark underwater environments, and transmits image data wirelessly through the water.
The autonomous camera is powered by sound. It converts mechanical energy from sound waves traveling through water into electrical energy that powers its imaging and communications equipment. After capturing and encoding image data, the camera also uses sound waves to transmit data to a receiver that reconstructs the image.
Because it doesn’t need a power source, the camera could run for weeks on end before retrieval, enabling scientists to search remote parts of the ocean for new species. It could also be used to capture images of ocean pollution or monitor the health and growth of fish raised in aquaculture farms.
“One of the most exciting applications of this camera for me personally is in the context of climate monitoring. We are building climate models, but we are missing data from over 95 percent of the ocean. This technology could help us build more accurate climate models and better understand how climate change impacts the underwater world,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab, and senior author of the paper.
Joining Adib on the paper are co-lead authors and Signal Kinetics group research assistants Sayed Saad Afzal, Waleed Akbar, and Osvy Rodriguez, as well as research scientist Unsoo Ha, and former group researchers Mario Doumet and Reza Ghaffarivardavagh. The paper is published in Nature Communications.
The battery-free, wireless underwater camera could help scientists explore unknown regions of the ocean, track pollution, or monitor the effects of climate change. (Image credit: Adam Glanzman)
Going battery-free
To build a camera that could operate autonomously for long periods, the researchers needed a device that could harvest energy underwater on its own while consuming very little power.
The camera acquires energy using transducers made from piezoelectric materials that are placed around its exterior. Piezoelectric materials produce an electric signal when a mechanical force is applied to them. When a sound wave traveling through the water hits the transducers, they vibrate and convert that mechanical energy into electrical energy.
Those sound waves could come from any source, like a passing ship or marine life. The camera stores harvested energy until it has built up enough to power the electronics that take photos and communicate data.
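The harvest-then-operate cycle described above can be sketched in a few lines. This is an illustrative simulation, not the device’s actual firmware, and the energy values and capture cost are made-up numbers:

```python
# Hypothetical sketch of the camera's duty cycle: energy harvested from
# the piezoelectric transducers accumulates until there is enough stored
# to power one capture-and-transmit cycle. All numbers are illustrative.

def harvest_until_ready(energy_pulses, capture_cost):
    """Accumulate harvested energy; return the indices at which a photo
    could be taken (i.e., stored energy reaches capture_cost)."""
    stored = 0.0
    capture_times = []
    for i, pulse in enumerate(energy_pulses):
        stored += pulse
        if stored >= capture_cost:
            stored -= capture_cost   # spend the stored energy on one capture
            capture_times.append(i)
    return capture_times

# Irregular acoustic energy arriving over time (e.g., passing ships)
pulses = [0.2, 0.1, 0.4, 0.05, 0.3, 0.5, 0.1]
print(harvest_until_ready(pulses, capture_cost=0.6))  # → [2, 5]
```

The key point the sketch captures is that the camera is opportunistic: it photographs whenever the ambient acoustic environment has supplied enough energy, rather than on a fixed schedule.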
To keep power consumption as low as possible, the researchers used off-the-shelf, ultra-low-power imaging sensors. But these sensors only capture grayscale images. And since most underwater environments lack a light source, they needed to develop a low-power flash, too.
They solved both problems simultaneously using red, green, and blue LEDs. When the camera captures an image, it shines a red LED and then uses image sensors to take the photo. It repeats the same process with green and blue LEDs.
Even though the image looks black and white, the red, green, and blue colored light is reflected in the white part of each photo, Akbar explains. When the image data are combined in post-processing, the color image can be reconstructed.
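The post-processing step Akbar describes can be illustrated with a minimal sketch (an assumption about the general approach, not the researchers’ actual pipeline): three grayscale exposures, each taken under one LED color, are stacked into the channels of a single color image.

```python
# Minimal illustration: combine three grayscale exposures taken under
# red, green, and blue LED illumination into one RGB image.
import numpy as np

def reconstruct_color(red_frame, green_frame, blue_frame):
    """Stack three single-channel exposures into an RGB image."""
    return np.stack([red_frame, green_frame, blue_frame], axis=-1)

# Toy 2x2 grayscale exposures, one per LED flash
r = np.array([[200, 10], [10, 10]], dtype=np.uint8)
g = np.array([[10, 200], [10, 10]], dtype=np.uint8)
b = np.array([[10, 10], [200, 10]], dtype=np.uint8)

rgb = reconstruct_color(r, g, b)
print(rgb.shape)  # → (2, 2, 3)
```

Each grayscale frame records how strongly the scene reflected one LED’s color, so stacking them recovers per-pixel color information without a color image sensor.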
Sending data with sound
Once image data are captured, they are encoded as bits (1s and 0s) and sent to a receiver one bit at a time using a process called underwater backscatter. The receiver transmits sound waves through the water to the camera, which acts as a mirror to reflect those waves. The camera either reflects a wave back to the receiver or changes its mirror to an absorber so that it does not reflect back.
A hydrophone next to the transmitter senses if a signal is reflected back from the camera. If it receives a signal, that is a bit-1, and if there is no signal, that is a bit-0. The system uses this binary information to reconstruct and post-process the image.
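The receiver-side decoding described above can be sketched as follows. This is a simplified assumption about how such a receiver might work, not the paper’s exact implementation: the hydrophone signal is split into one window per bit, and a window whose energy exceeds a threshold is read as a 1 (the camera reflected), otherwise a 0 (the camera absorbed).

```python
# Simplified sketch of decoding an underwater-backscatter signal:
# energy above a threshold in a bit window -> 1, otherwise -> 0.
import numpy as np

def decode_backscatter(signal, bit_window, threshold):
    """Decode a reflect/absorb-modulated hydrophone signal into bits."""
    bits = []
    for start in range(0, len(signal), bit_window):
        window = signal[start:start + bit_window]
        energy = np.sum(np.square(window))
        bits.append(1 if energy > threshold else 0)
    return bits

# Toy signal: four bit-windows of four samples, encoding 1, 0, 1, 1
sig = np.array([0.9, 0.8, 1.0, 0.9,   # reflected -> high energy
                0.0, 0.1, 0.0, 0.1,   # absorbed  -> low energy
                1.0, 0.9, 0.8, 1.0,
                0.9, 1.0, 0.9, 0.8])
print(decode_backscatter(sig, bit_window=4, threshold=1.0))  # → [1, 0, 1, 1]
```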
“This whole process, since it just requires a single switch to convert the device from a nonreflective state to a reflective state, consumes five orders of magnitude less power than typical underwater communications systems,” Afzal says.
The researchers tested the camera in several underwater environments. In one, they captured color images of plastic bottles floating in a New Hampshire pond. They were also able to take such high-quality photos of an African starfish that tiny tubercles along its arms were clearly visible. The device was also effective at repeatedly imaging the underwater plant Aponogeton ulvaceus in a dark environment over the course of a week to monitor its growth.
Now that they have demonstrated a working prototype, the researchers plan to enhance the device so it is practical for deployment in real-world settings. They want to increase the camera’s memory so it could capture photos in real-time, stream images, or even shoot underwater video.
They also want to extend the camera’s range. They successfully transmitted data 40 meters from the receiver, but pushing that range wider would enable the camera to be used in more underwater settings.
This research is supported, in part, by the Office of Naval Research, the Sloan Research Fellowship, the National Science Foundation, the MIT Media Lab, and the Doherty Chair in Ocean Utilization.
When we breathe in, our lungs fill with oxygen, which is distributed to our red blood cells for transportation throughout our bodies. Our bodies need a lot of oxygen to function, and healthy people have at least 95% oxygen saturation all the time.
Conditions like asthma or COVID-19 make it harder for bodies to absorb oxygen from the lungs. This leads to oxygen saturation percentages that drop to 90% or below, an indication that medical attention is needed.
In a clinic, doctors monitor oxygen saturation using pulse oximeters — those clips you put over your fingertip or ear. But monitoring oxygen saturation at home multiple times a day could help patients keep an eye on COVID symptoms, for example.
In a proof-of-principle study, University of Washington and University of California San Diego researchers have shown that smartphones are capable of detecting blood oxygen saturation levels down to 70%. This is the lowest value that pulse oximeters should be able to measure, as recommended by the U.S. Food and Drug Administration.
The technique involves participants placing their finger over the camera and flash of a smartphone, which uses a deep-learning algorithm to decipher the blood oxygen levels. When the team delivered a controlled mixture of nitrogen and oxygen to six subjects to artificially bring their blood oxygen levels down, the smartphone correctly predicted whether the subject had low blood oxygen levels 80% of the time.
In a proof-of-principle study, University of Washington and University of California San Diego researchers have shown that smartphones are capable of detecting blood oxygen saturation levels down to 70%. The technique involves having participants place their finger over the camera and flash of a smartphone, which uses a deep-learning algorithm to decipher the blood oxygen levels from the blood flow patterns in the resulting video. (Photo: Dennis Wise/University of Washington)
“Other smartphone apps that do this were developed by asking people to hold their breath. But people get very uncomfortable and have to breathe after a minute or so, and that’s before their blood-oxygen levels have gone down far enough to represent the full range of clinically relevant data,” said co-lead author Jason Hoffman, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “With our test, we’re able to gather 15 minutes of data from each subject. Our data shows that smartphones could work well right in the critical threshold range.”
Another benefit of measuring blood oxygen levels on a smartphone is that almost everyone has one.
“This way you could have multiple measurements with your own device at either no cost or low cost,” said co-author Dr. Matthew Thompson, professor of family medicine in the UW School of Medicine. “In an ideal world, this information could be seamlessly transmitted to a doctor’s office. This would be really beneficial for telemedicine appointments or for triage nurses to be able to quickly determine whether patients need to go to the emergency department or if they can continue to rest at home and make an appointment with their primary care provider later.”
The team recruited six participants ranging in age from 20 to 34. Three identified as female, three identified as male. One participant identified as being African American, while the rest identified as being Caucasian.
To gather data to train and test the algorithm, the researchers had each participant wear a standard pulse oximeter on one finger and then place another finger on the same hand over a smartphone’s camera and flash. Each participant had this same setup on both hands simultaneously.
“The camera records how much that blood absorbs the light from the flash in each of the three color channels it measures: red, green and blue,” said senior author Edward Wang, who also directs the UC San Diego DigiHealth Lab. “Then we can feed those intensity measurements into our deep-learning model.”
Each participant breathed in a controlled mixture of oxygen and nitrogen to slowly reduce oxygen levels. The process took about 15 minutes. For all six participants, the team acquired more than 10,000 blood oxygen level readings between 61% and 100%.
“Smartphone light can get scattered by all these other components in your finger, which means there’s a lot of noise in the data that we’re looking at,” said co-lead author Varun Viswanath, a UW alumnus who is now a doctoral student advised by Wang at UC San Diego. “Deep learning is a really helpful technique here because it can see these really complex and nuanced features and helps you find patterns that you wouldn’t otherwise be able to see.”
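The feature-extraction step Wang describes can be sketched as below. This is a hedged illustration of the general idea (per-frame mean intensity of each color channel from the fingertip video), not the team’s actual code; the deep-learning model that consumes these features is not reproduced here.

```python
# Sketch: turn a flash-lit fingertip video into a per-frame time series
# of mean red, green, and blue intensities (the kind of input a model
# like the team's could consume). Illustrative only.
import numpy as np

def channel_intensities(frames):
    """frames: array of shape (n_frames, height, width, 3).
    Returns an (n_frames, 3) array of mean R, G, B intensity per frame."""
    return frames.mean(axis=(1, 2))

# Toy clip: 2 frames of 4x4 pixels
frames = np.zeros((2, 4, 4, 3))
frames[0, ..., 0] = 120.0   # frame 0 is strongly red
frames[1, ..., 2] = 80.0    # frame 1 is strongly blue
print(channel_intensities(frames))
```

Averaging over the whole frame is one simple way to suppress the pixel-level noise Viswanath mentions, while keeping the slow color changes that track blood flow.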
Xiaomi Mi 7 is all set for a grand release with in-display fingerprint scanner, Snapdragon 845 chipset among other features and a Chinese website has leaked the picture with the possible date of its release.
The Chinese smartphone maker Xiaomi, which is currently preparing an IPO on the Hong Kong exchange, was expected to announce its flagship Mi 7 at the Mobile World Congress 2018, held from February 26 to March 1, but it disappointed fans with no announcement on the Mi 7.
Now reports have surfaced in China and elsewhere that Xiaomi might announce the Mi 6 successor this month, perhaps once the IPO filing with the exchange is done.
But a Weibo post about the Chinese technology giant has revealed that the release date of the Mi 7 will be May 23, 2018. The post also includes a photo of the device with the Mi logo on it.
Since it has been more than a year since its predecessor, the Mi 6, was released, the news cannot be brushed aside as fake. Xiaomi has never shied away from releasing at least one new version of every brand it sells in the market.
According to the Weibo post, the Xiaomi Mi 7 will feature an in-display fingerprint scanner and a notch up top. It could sport either a 5.65-inch or a 5.8-inch bezel-less AMOLED display with a 2,560×1,440-pixel resolution, up from the 5.15-inch (1,080×1,920-pixel) display seen on the Mi 6.
Under the hood, the flagship is likely to have a Qualcomm Snapdragon 845 processor, a 6GB/8GB RAM, a 128GB/256GB internal storage, a dual 19MP+19MP main camera, and a 4,480mAh battery.
Computer scientists at the University of Waterloo have developed a smartphone app that helps people learn the art of taking great selfies.
Inside the app is an algorithm that directs the user where to position the camera allowing them to take the best shot possible.
“Selfies have increasingly become a normal way for people to express themselves and their experiences, only not all selfies are created equal,” said Dan Vogel, a professor of computer science at Waterloo. “Unlike other apps that enhance a photo after you take it, this system gives direction, meaning the user is actually learning why their photo will be better.”
In developing the algorithm, Vogel and Qifan Li, a former Master’s student at Waterloo, bought 3D digital scans of “average” looking people. They took hundreds of “virtual selfies” by writing code to control a virtual smartphone camera and computer-generated lighting which allowed them to explore different composition principles, including lighting direction, face position and face size.
Using an online crowdsourcing service, the researchers had thousands of people vote on which of the virtual selfie photos they felt were best, and then mathematically modelled the patterns of votes to develop an algorithm that can guide people to take the best selfie.
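The article doesn’t specify how the votes were modeled, but one standard way to turn pairwise “which photo is better” votes into per-photo scores is a Bradley–Terry-style fit. The sketch below is an assumption for illustration, not the Waterloo team’s actual model:

```python
# Bradley-Terry-style scoring of items from pairwise votes (illustrative;
# not the paper's actual model). wins[i][j] = times item i beat item j.
import numpy as np

def bradley_terry(wins, n_items, iters=200):
    """Fit per-item preference scores from a pairwise win matrix
    using the classic minorization-maximization update."""
    w = np.asarray(wins, dtype=float)
    p = np.ones(n_items)
    for _ in range(iters):
        for i in range(n_items):
            num = w[i].sum()  # total wins for item i
            denom = sum((w[i, j] + w[j, i]) / (p[i] + p[j])
                        for j in range(n_items) if j != i)
            if denom > 0:
                p[i] = num / denom
        p /= p.sum()          # normalize so scores are comparable
    return p

# Toy votes among three selfie compositions
wins = [[0, 8, 9],
        [2, 0, 6],
        [1, 4, 0]]
scores = bradley_terry(wins, 3)
print(scores.argmax())  # composition 0 is ranked best
```

Fitted scores like these give each candidate composition a single quality number, which is the kind of signal an app can maximize when guiding the user toward a better shot.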
They later had real people take selfies with a standard camera app, and an app powered by the algorithm. Based on more online ratings, they found a 26 per cent improvement in selfies taken with Waterloo’s app.
“This is just the beginning of what is possible,” said Vogel. “We can expand the variables to include aspects such as hairstyle, types of smile or even the outfit you wear.
“When it comes to teaching people to take better selfies, the sky’s the limit.”
Vogel and Li recently presented the work in Edinburgh, Scotland at the 2017 ACM Conference on Designing Interactive Systems.
When taking a picture, a photographer must typically commit to a composition that cannot be changed after the shutter is released. For example, when using a wide-angle lens to capture a subject in front of an appealing background, it is difficult to include the entire background and still have the subject be large enough in the frame.
Positioning the subject closer to the camera will make it larger, but unwanted distortion can occur. This distortion is reduced when shooting with a telephoto lens, since the photographer can move back while maintaining the foreground subject at a reasonable size. But this causes most of the background to be excluded. In each case, the photographer has to settle for a suboptimal composition that cannot be modified later.
As described in a technical paper to be presented July 31 at the ACM SIGGRAPH 2017 conference, UC Santa Barbara Ph.D. student Abhishek Badki and his advisor Pradeep Sen, a professor in the Department of Electrical and Computer Engineering, along with NVIDIA researchers Orazio Gallo and Jan Kautz, have developed a new system that addresses this problem. Specifically, it allows photographers to compose an image post-capture by controlling the relative positions and sizes of objects in the image.
Computational Zoom, as the system is called, allows photographers the flexibility to generate novel image compositions — even some that cannot be captured by a physical camera — by controlling the sense of depth in the scene, the relative sizes of objects at different depths and the perspectives from which the objects are viewed.
For example, the system makes it possible to automatically combine wide-angle and telephoto perspectives into a single multi-perspective image, so that the subject is properly sized and the full background is visible. In a standard image, the light rays travel in straight lines into the camera at an angle specified by the focal length of the lens (the field of view angle). However, this new functionality allows photographers to produce physically impossible images in which the light rays “bend,” changing from a telephoto to a wide angle as they go through the scene.
Achieving the custom composition is a three-step process. First, the photographer must capture a “stack” of multiple images, moving the camera gradually closer to the scene between shots without changing the focal length of the lens. The system then uses the captured image stack, and a standard structure-from-motion algorithm, to automatically estimate the camera position and orientation for each image. Next, a novel multi-view 3D reconstruction method estimates “depth maps” for each image in the stack. Finally, all of this information is used to synthesize multi-perspective images which have novel compositions through a user interface.
“This new framework really empowers photographers by giving them much more flexibility later on to compose their desired shot,” said Pradeep Sen. “It allows them to tell the story they want to tell.”
“Computational Zoom is a powerful technique to create compelling images,” said Gallo, NVIDIA senior research scientist. “Photographers can manipulate a composition in real time, developing plausible images that cannot be captured with a physical camera.”
Eventually, the researchers hope to integrate the system as a plug-in to existing image-processing software, allowing a new kind of post-capture compositional freedom for professional and amateur photographers alike.
Find out more about the project, or see results of the post-capture method on YouTube.