Toyota is in no hurry to electrify its range, which seems a little odd when other manufacturers are rushing out new electric models. In fact, Toyota's president, Akio Toyoda, has argued against rushed electrification of personal transport. Yet now Toyota is introducing a new electric car, the C+pod – just look how tiny it is!
The Toyota C+pod is a 2.5-metre-long two-seater city car created for cheap personal transport. Toyota plans to sell these cars to various organizations first in order to promote electric transportation – imagine them being rented to tourists or used in various ride-sharing services. In 2022, the C+pod should become available to everyone.
Because the Toyota C+pod is intended to be used almost exclusively in the city, it is rather slow: its top speed is just 60 km/h. While cruising, it can stretch its 9 kWh battery to a range of 150 km. Feeling some range anxiety already? Well, multiple studies have shown that most city dwellers do not even cover 150 km in a day, so it should be more than enough for young people driving to school, work, or the cinema. Charging won't take too long either – about 5 hours on a single-phase 200 V/16 A connection, or 16 hours on 100 V/6 A.
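As a sanity check on those charging figures, here is a minimal back-of-the-envelope estimate. The ~90% charging-efficiency factor and the simple linear model are assumptions on my part; real charging tapers near full and can be limited by the onboard charger.

```python
def estimate_charge_hours(capacity_kwh, volts, amps, efficiency=0.9):
    """Naive single-phase AC charging estimate: battery energy divided
    by wall power, derated by an assumed ~90% charging efficiency."""
    wall_power_kw = volts * amps / 1000 * efficiency
    return capacity_kwh / wall_power_kw

slow = estimate_charge_hours(9.0, 100, 6)    # ~16.7 h, close to the quoted 16 h
fast = estimate_charge_hours(9.0, 200, 16)   # ~3.1 h, well under the quoted 5 h
```

The 100 V/6 A figure lines up neatly with Toyota's quoted 16 hours, while the 200 V/16 A estimate comes out well under the quoted 5 hours – presumably the car's onboard charger draws less than the full 3.2 kW the outlet could supply.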
The C+pod should be fun to drive, too, because its 9.2 kW motor sits on the rear axle and drives the rear wheels. Depending on the configuration, the C+pod weighs 670–690 kg.
Although the C+pod will be sold in Japan only and is aimed at younger buyers, its 1,100 mm wide cabin will easily seat two adults. The interior layout is simple but functional, and the C+pod even gets LED headlights and taillights. The outer body panels are made of plastic, but you would expect that given the car's tiny price.
The Toyota C+pod will cost from 1,650,000 yen – roughly 16 thousand dollars (about 13 thousand euros). For that money you get a whole load of safety equipment, including the Pre-Collision Safety System, which detects other vehicles (day and night), pedestrians (day and night), and cyclists (day); Intelligent Clearance Sonar with Parking Support Brakes; front disc brakes; and many other systems.
Looking at the Toyota C+pod, one word comes to mind – friendly. It is simply a friendly little vehicle, perfectly suited to an urban lifestyle.
Redmi Note 10 Launch Teased Officially After Rumours Tipping February Debut in India
The Redmi Note 10 launch has been officially teased on Weibo. The development comes just weeks after the rumour mill suggested the existence of the Redmi Note 10 series, which could include the Redmi Note 10, the Redmi Note 10 Pro, and the Redmi Note 10 Pro 5G. The new series is expected to succeed the Redmi Note 9 family that debuted with the launch of the Redmi Note 9 Pro and the Redmi Note 9 Pro Max in India in March last year.
Redmi General Manager Lu Weibing has teased the launch of the Redmi Note 10 on Weibo. Instead of giving away details of the phone directly, Lu posted an image of the Redmi Note 9 4G, asking users about their expectations for the Redmi Note 10.
The Redmi Note 10 is speculated to launch in India alongside the Redmi Note 10 Pro in February. Both phones are expected to be priced aggressively, according to tipster Ishan Agarwal. The Redmi Note 10 is tipped to come in Gray, Green, and White colour options.
Although Xiaomi hasn't provided any specifics about the phone yet, the Redmi Note 10 Pro 5G purportedly received certification from the Bureau of Indian Standards (BIS) earlier this month. The phone is also said to have surfaced on the US Federal Communications Commission (FCC) website with the model number M2101K6G, and has reportedly appeared on the websites of other regulatory bodies, including the Eurasian Economic Commission (EEC), Singapore's IMDA, and Malaysia's MCMC.
Redmi Note 10 series specifications (expected)
The Redmi Note 10 Pro is rumoured to come with a 120Hz display and be powered by the Qualcomm Snapdragon 732G SoC, while the 5G variant is said to use the Snapdragon 750G SoC. It is speculated to come in 6GB and 8GB RAM options as well as 64GB and 128GB storage versions. The Redmi Note 10 Pro models will come with a 64-megapixel primary camera sensor and a 5,050mAh battery, according to a recent report.
Similar to the Redmi Note 10 Pro models, the Redmi Note 10 is also rumoured to have both 4G and 5G versions. The smartphone is tipped to have a 48-megapixel primary camera sensor and include a 6,000mAh battery.
The Redmi Note 10 Pro and the Redmi Note 10 are both expected to run on Android 11 with MIUI 12 out-of-the-box.
Cybersecurity: Blaming users is not the answer
A punitive approach toward employees reporting data breaches intensifies problems.
Experts warn that, when it comes to cybersecurity, blaming users is a terrible idea – doing so likely creates an even worse situation. “Many organizations have defaulted to a blame culture when it comes to data security,” comments Tony Pepper, CEO of Egress Software Technologies, in an email exchange. “They believe actions have consequences and someone has to be responsible.”
“In cases where employees report incidents of data loss they accidentally caused, it’s quite common for them to face serious negative consequences,” continues Pepper. “This, obviously, creates a culture of fear, leading to a lack of self-reporting, which in turn, exacerbates the problem. Many organizations are therefore unaware of the scale of their security issues.”
Pepper’s comments are based on findings gleaned by the independent market research firm Arlington Research. Analysts interviewed more than 500 upper-level managers from organizations within the financial services, healthcare, banking, and legal sectors.
The analysts' findings were published in the Outbound Email Security Report. Regarding employees responsible for a loss of data, 45% of those surveyed said they would reprimand the employee(s), and 25% said they would likely fire them.
Pepper suggests that while organizations may believe this approach decreases the chance of the offense recurring, it can have a different and more damaging effect: employees may simply not report security incidents in order to avoid repercussions from company management.
“Especially in these uncertain times, employees are going to be even less willing to self-report, or report others, if they believe they might lose their jobs as the result,” adds Pepper.
It gets worse
According to survey findings, a high percentage of organizations rely on their employees to be the primary data breach detection mechanism–particularly when it comes to email. “Our research found that 62% of organizations rely on people-based reporting to alert management about data breaches,” mentions Pepper. “By reprimanding employees who were only trying to do their job, organizations are undermining the reporting mechanism and ensuring incidents will go unreported.”
The lack of truly understanding why data is escaping the digital confines of an organization makes it hugely difficult for those in charge of cybersecurity to develop a defensive strategy that will effectively protect an organization’s data.
Overcome the blame game
Once it is understood that reprimanding employees is ineffective, organizations should look to create a more positive security culture. One immediate benefit is the increased visibility of heretofore unknown security risks.
Another benefit is the ability to show regulatory bodies the organization has taken all reasonable steps to protect sensitive data. Pepper adds, “If you don’t know where your risks are, it’s hard to put reasonable measures in place. Regulators could surmise that during a data breach investigation and levy higher fines and penalties.”
Technology has a role
Once the blame game is curtailed, it’s time to get technology involved. “The first step is to get reporting right, using technology, not people, which will remove the pressure of self-reporting from employees and place the responsibility firmly in the hands of those in charge of cybersecurity,” suggests Pepper. “Advances in contextual machine learning mean it’s possible for security tools to understand users and learn from their actions, so they can detect and mitigate abnormal behavior–for example, adding an incorrect recipient to an email.”
This is where technology makes all the difference: it prevents accidental data loss before it happens, empowers employees to be part of the solution, and gives the security team unbiased visibility of risks and emerging threats.
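The “incorrect recipient” check Pepper describes can be illustrated with a toy sketch. Real products use contextual machine learning; the frequency heuristic below – flagging any recipient a sender has never emailed before – is my own simplified stand-in, with hypothetical names throughout.

```python
from collections import defaultdict

class RecipientChecker:
    """Toy misdirected-email check: flag recipients a given sender has
    never emailed before. This simple history-based heuristic only
    illustrates the idea behind contextual-ML email DLP tools."""

    def __init__(self):
        # Map each sender to the set of recipients they have emailed.
        self.history = defaultdict(set)

    def record(self, sender, recipients):
        """Learn from a legitimately sent email."""
        self.history[sender].update(recipients)

    def flag_unusual(self, sender, recipients):
        """Return recipients this sender has never emailed before."""
        return [r for r in recipients if r not in self.history[sender]]
```

In use, the checker would warn before sending: after recording alice's past emails to colleagues, a draft addressed to an unfamiliar external address would come back flagged, prompting the sender to confirm – automatic detection rather than after-the-fact self-reporting.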
What cybersecurity teams need to understand
Education about potential consequences is vital. Anyone working with the organization’s digital assets needs to understand the possible outcomes from a data breach–for example, regulatory fines or damage to the organization’s reputation.
It’s a safe bet when users understand the consequences of emailing client data to the wrong recipient or responding to a phishing email, they’ll be much more likely to report the incident if and when it occurs. Remember: If an incident isn’t reported, there’s no way to remediate it or prevent it from happening again.
Pepper, in conclusion, offers advice to those managing cybersecurity. “The best way to engage employees with security, and ensure they understand its importance, is to create a ‘security-positive’ company culture,” explains Pepper. “Security teams need to reassure the wider organization that, while data breaches are to be taken seriously, employees who report accidental incidents will receive appropriate support from the business and not face severe repercussions.”
ArtEmis: Affective Language for Visual Art
Most annotation datasets in computer vision focus on objective, content-based applications. A recent paper on arXiv.org investigates the underexplored relationship between visual content and the emotional effect it produces, as expressed through language.
The authors collected a dataset of natural-language emotional reactions to visual artwork. Annotators expressed moods, feelings, personal attitudes, and abstract concepts like freedom, and explained their psychological interpretations by linking them to visual attributes.
Some examples even include metaphorical descriptions tied to subjective experience (like ‘it reminds me of my grandmother’). The dataset's further potential is demonstrated by training neural speakers on it: some of these speakers produced well-grounded visual explanations and fared reasonably well in a Turing-style test.
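To make the dataset's structure concrete, here is a simplified sketch of what an ArtEmis-style annotation record and a basic aggregation over it might look like. The field names and sample utterances are my own illustrations, not the paper's actual schema.

```python
from collections import Counter

# Hypothetical, simplified ArtEmis-style records: each annotator picks a
# dominant emotion for an artwork and explains it in natural language.
annotations = [
    {"artwork": "starry-night", "emotion": "awe",
     "utterance": "The swirling sky makes me feel small."},
    {"artwork": "starry-night", "emotion": "excitement",
     "utterance": "The colors feel alive and in motion."},
    {"artwork": "starry-night", "emotion": "awe",
     "utterance": "It reminds me of camping under the stars as a child."},
]

def dominant_emotion(records, artwork):
    """Majority vote over annotators' emotion labels for one artwork."""
    votes = Counter(r["emotion"] for r in records if r["artwork"] == artwork)
    return votes.most_common(1)[0][0]
```

Pairing each emotion label with a grounded explanation is what distinguishes this kind of affective dataset from a plain emotion-classification corpus, and it is what the paper's captioning systems are trained to reproduce.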
We present a novel large-scale dataset and accompanying machine learning models aimed at providing a detailed understanding of the interplay between visual content, its emotional effect, and explanations for the latter in language. In contrast to most existing annotation datasets in computer vision, we focus on the affective experience triggered by visual artworks and ask the annotators to indicate the dominant emotion they feel for a given image and, crucially, to also provide a grounded verbal explanation for their emotion choice. As we demonstrate below, this leads to a rich set of signals for both the objective content and the affective impact of an image, creating associations with abstract concepts (e.g., “freedom” or “love”), or references that go beyond what is directly visible, including visual similes and metaphors, or subjective references to personal experiences. We focus on visual art (e.g., paintings, artistic photographs) as it is a prime example of imagery created to elicit emotional responses from its viewers. Our dataset, termed ArtEmis, contains 439K emotion attributions and explanations from humans, on 81K artworks from WikiArt. Building on this data, we train and demonstrate a series of captioning systems capable of expressing and explaining emotions from visual stimuli. Remarkably, the captions produced by these systems often succeed in reflecting the semantic and abstract content of the image, going well beyond systems trained on existing datasets. The collected dataset and developed methods are available at this https URL.
Research article: Achlioptas, P., Ovsjanikov, M., Haydarov, K., Elhoseiny, M., and Guibas, L., “ArtEmis: Affective Language for Visual Art”, arXiv:2101.07396. Link: https://arxiv.org/abs/2101.07396