In February of 2013, Eric Loomis was driving around in the small town of La Crosse in Wisconsin, US, when he was stopped by the police. The car he was driving turned out to have been involved in a shooting, and he was arrested. Eventually a court sentenced him to six years in prison.

This might have been an uneventful case, had it not been for a piece of technology that aided the judge in making the decision: COMPAS, an algorithm that estimates the risk of a defendant reoffending. The court feeds a range of data, such as the defendant’s demographic information, into the system, which yields a score of how likely they are to commit another crime.

How the algorithm predicts this, however, remains opaque. The system, in other words, is a black box – a practice Loomis challenged in a 2017 complaint to the US Supreme Court. He claimed COMPAS used gender and racial data to make its decisions, and ranked African Americans as higher recidivism risks. The court eventually rejected his case, arguing that the sentence would have been the same even without the algorithm. Yet there have also been a number of investigations suggesting that COMPAS does not accurately predict recidivism.

Adoption

While algorithmic sentencing systems are already in use in the US, their adoption in Europe has generally been limited. In 2018, for example, a Dutch AI sentencing system that judged private cases, such as late payments to companies, was shut down after critical media coverage. Yet AI has entered other fields across Europe. It is being rolled out to help European doctors diagnose Covid-19. And start-ups like the British M:QUBE, which uses AI to analyse mortgage applications, are popping up fast.

These systems run historical data through an algorithm, which then comes up with a prediction or course of action. Yet often we don’t know how such a system reaches its conclusion. It might work correctly, or it might contain a technical error. It might even reproduce some form of bias, such as racism, without its designers realising it.

This is why researchers want to open this black box and make AI systems transparent, or ‘explainable’ – a movement that is now picking up steam. The EU White Paper on Artificial Intelligence, released earlier this year, called for explainable AI; major companies like Google and IBM are funding research into it; and the GDPR even includes a right to explainability for consumers.

‘We are now able to produce AI models that are very efficient in making decisions,’ said Fosca Giannotti, senior researcher at the Information Science and Technology Institute of the National Research Council in Pisa, Italy. ‘But often these models are impossible to understand for the end-user, which is why explainable AI is becoming so popular.’

Diagnosis

Giannotti leads a research project on explainable AI, called XAI, which aims to make AI systems reveal their internal logic. The project works on automated decision support systems, such as technology that helps a doctor make a diagnosis or algorithms that recommend whether or not a bank should give someone a loan. The researchers hope to develop technical methods, or even new algorithms, that can help make AI explainable.

‘Humans still make the final decisions in these systems,’ said Giannotti. ‘But every human that uses these systems should have a clear understanding of the logic behind the suggestion.’

Today, hospitals and doctors increasingly experiment with AI systems to support their decisions, but they are often unaware of how a given decision was reached. In these cases, the AI analyses large amounts of medical data and yields a percentage indicating how likely it is that a patient has a certain disease.

For example, a system might be trained on large numbers of photos of human skin, some of which show symptoms of skin cancer. Based on that data, it predicts from new pictures of a skin anomaly whether someone is likely to have skin cancer. These systems are not yet in general practice, but hospitals are increasingly testing them and integrating them into their daily work.

These systems often use a popular AI method called deep learning, which makes large numbers of small sub-decisions. These are grouped into a network with layers that can range from a few dozen up to hundreds deep, making it particularly hard to see why the system suggested that someone has skin cancer, for example, or to identify faulty reasoning.
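To make that layered structure concrete, here is a minimal sketch of such a network written in PyTorch. It is purely illustrative: the architecture, the image size and the single output are invented for this example and are far simpler than the models hospitals actually test.

```python
# A minimal, purely illustrative deep network: a stack of small layers that
# turns an image into a single score. Real diagnostic models are far larger
# and are trained on clinical data; everything here is made up.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # layer 1: detect simple patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # layer 2: combine them
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),                  # final layer: one score
    nn.Sigmoid(),                                # squash to a 0-1 'probability'
)

fake_image = torch.randn(1, 3, 64, 64)           # one 64x64 RGB image
print(model(fake_image))                         # e.g. tensor([[0.52]])
```

Even in this toy model, the final score is the result of thousands of intermediate values spread across the layers, which is exactly what makes the reasoning hard to inspect.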

‘Sometimes even the computer scientist who designed the network cannot really understand the logic,’ said Giannotti.

Natural language

For Senén Barro, professor of computer science and artificial intelligence at the University of Santiago de Compostela in Spain, AI should not only be able to justify its decisions but do so using human language.

‘Explainable AI should be able to communicate the outcome naturally to humans, but also the reasoning process that justifies the result,’ said Prof. Barro.

He is scientific coordinator of a project called NL4XAI, which is training researchers to make AI systems explainable by exploring different sub-areas of the problem, such as specific techniques for generating explanations.

He says that the end result could look similar to a chatbot. ‘Natural language technology can build conversational agents that convey these interactive explanations to humans,’ he said.

Another method of giving explanations is for the system to provide a counterfactual. ‘It might mean that the system gives an example of what someone would need to change to alter the solution,’ said Giannotti. In the case of a loan-judging algorithm, a counterfactual might show someone whose loan was denied the nearest case in which it would have been approved. It might say that their salary is too low, but that if they earned €1,000 more per year, they would be eligible.
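As a concrete illustration of the counterfactual idea, the hedged sketch below trains a toy loan classifier on invented data and then searches for the smallest salary increase that would flip a denial into an approval. Nothing here reflects a real lender’s model; the features, figures and helper function are made up.

```python
# Toy, entirely hypothetical loan model plus a counterfactual search:
# find the smallest salary increase that flips a denial into an approval.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Invented training data: [annual salary in EUR, existing debt in EUR] -> approved?
X = np.array([[20000, 5000], [25000, 2000], [40000, 10000],
              [55000, 3000], [60000, 20000], [80000, 5000]])
y = np.array([0, 0, 1, 1, 1, 1])   # 0 = denied, 1 = approved
model = DecisionTreeClassifier(random_state=0).fit(X, y)

def salary_counterfactual(applicant, step=500, max_raise=50000):
    """Smallest yearly salary increase (in steps of `step`) that gets approval."""
    for extra in range(0, max_raise + 1, step):
        candidate = [applicant[0] + extra, applicant[1]]
        if model.predict([candidate])[0] == 1:
            return extra
    return None  # no counterfactual found within the search range

applicant = [28000, 4000]                # denied by the toy model
print(model.predict([applicant])[0])     # -> 0
print(salary_counterfactual(applicant))  # -> e.g. 5000 (EUR more per year)
```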

White box

Giannotti says there are two main approaches to explainability. One is to start from black box algorithms, which are not capable of explaining their results themselves, and find ways to uncover their inner logic. Researchers can attach another algorithm to this black box system – an ‘explanator’ – which asks a range of questions of the black box and compares the results with the input it offered. From this process the explanator can reconstruct how the black box system works.
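One generic way to build such an explanator, sketched below, is a global surrogate: probe the black-box model with many inputs, record its answers, and fit a small, readable model that mimics them. This is a standard illustration of the idea, not necessarily the specific method the XAI project develops.

```python
# A minimal 'explanator' sketch: ask a black-box model many questions,
# record its answers, and fit an interpretable surrogate that mimics them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Ask the black box a range of questions (predictions on probe points)...
probes = np.random.default_rng(0).normal(size=(2000, 4))
answers = black_box.predict(probes)

# ...and reconstruct its behaviour with a small, readable stand-in.
explanator = DecisionTreeClassifier(max_depth=3, random_state=0).fit(probes, answers)
print(export_text(explanator, feature_names=[f"x{i}" for i in range(4)]))
print("agreement with black box:", (explanator.predict(probes) == answers).mean())
```

The printed tree rules are only an approximation of the black box; the agreement score indicates how faithfully the surrogate reproduces its answers.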

‘But another way is just to throw away the black box, and use white box algorithms,’ said Giannotti. These are machine learning systems that are explainable by design, yet often are less powerful than their black box counterparts.

‘We cannot yet say which approach is better,’ cautioned Giannotti. ‘The choice depends on the data we are working on.’ When analysing very large amounts of data, like a database filled with high-resolution images, a black box system is often needed because it is more powerful. But for lighter tasks, a white box algorithm might work better.
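The hedged sketch below illustrates that trade-off on a small synthetic task, training a black-box ensemble and a white-box tree on the same data and comparing their test accuracy. The dataset and models are stand-ins chosen purely for illustration; which approach wins depends entirely on the task.

```python
# A small illustration of the black-box vs white-box trade-off on synthetic
# data. This is not a benchmark; results vary with the task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier   # powerful, hard to inspect
from sklearn.tree import DecisionTreeClassifier       # weaker, readable by design
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
white_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("black box accuracy:", black_box.score(X_test, y_test))
print("white box accuracy:", white_box.score(X_test, y_test))
```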

Finding the right approach to achieving explainability is still a big problem though. Researchers need to find technical measures to see whether an explanation actually explains a black-box system well. ‘The biggest challenge is on defining new evaluation protocols to validate the goodness and effectiveness of the generated explanation,’ said Prof. Barro of NL4XAI.

On top of that, the exact definition of explainability is somewhat unclear, and depends on the situation in which it is applied. An AI researcher who writes an algorithm will need a different kind of explanation compared to a doctor who uses a system to make medical diagnoses.

‘Human evaluation (of the system’s output) is inherently subjective since it depends on the background of the person who interacts with the intelligent machine,’ said Dr Jose María Alonso, deputy coordinator of NL4XAI and also a researcher at the University of Santiago de Compostela.

Yet the drive for explainable AI is moving along step by step, promising to improve cooperation between humans and machines. ‘Humans won’t be replaced by AI,’ said Giannotti. ‘They will be amplified by computers. But explanation is an important precondition for this cooperation.’

The research in this article was funded by the EU.

Written by Tom Cassauwers

This article was originally published in Horizon, the EU Research and Innovation magazine.




Diamonds may help measure thermal conductivity in living cells


Scientists have very precise instruments, but measuring the properties of tiny cells is still very difficult. Now researchers at the University of Queensland have developed a new tool to measure heat transfer inside living cells. It uses actual diamonds and can work as both a heater and a thermometer. Someday it could improve cancer diagnosis.

Diamonds are essentially very hard pieces of carbon, which makes them great for some scientific applications. Image credit: En-cas-de-soleil via Wikimedia (CC BY-SA 4.0)

Cancer cells are different – they behave differently and exhibit different properties. Scientists have long speculated that in some cases precisely targeted thermal therapies could be very effective against cancer. However, for this to become reality, scientists need to know the thermal conductivity of living cells. With current technology it has been practically impossible to measure thermal conductivity – the rate at which heat flows through an object when one side is hot and the other is cold – inside things as tiny as living cells.
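For reference, thermal conductivity is the constant $k$ in Fourier’s law of heat conduction – a standard textbook relation, not something specific to this study:

$$ \vec{q} = -k \, \nabla T $$

where $\vec{q}$ is the heat flux (the heat flowing per unit area per unit time) and $\nabla T$ is the temperature gradient across the material.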

Scientists from Australia, Japan and Singapore have now employed nanodiamonds (just tiny diamonds) to act as minute sensors in a new system. Diamonds are great because they are very hard and because they are simply a different form of carbon, which is very well known to us. The scientists coated their nanodiamonds with a special heat-releasing polymer. The result is a sensor that can act as a heater or a thermometer, depending on what kind of laser light is applied, and that can measure thermal conductivity in living cells with a resolution of 200 nanometres.

Associate Professor Taras Plakhotnik, lead author of the study, said that this new method already revealed some new interesting information about cells. He said: “We found that the rate of heat diffusion in cells, as measured in our experiments, was several times slower than in pure water, for example.”

If cancer cells and healthy cells exhibit different thermal conductivity, this kind of measurement could become a very precise diagnostic technique. Also, because these particles are not toxic and can be used in living cells, scientists think they could open the door to improving heat-based treatments for cancer. Measuring heat conductivity could also help monitor biochemical reactions in the cell in real time. But that’s not all. Scientists think that this method could lead to a better understanding of metabolic disorders, such as obesity.

Diamonds are commonly used in science and industry. People oftentimes see them as something from the jewelry world, but they are much more common elsewhere. And they are not even that expensive. Hopefully, this study will result in a new method to research living cells and maybe some novel therapies as well.

 

Source: University of Queensland




Redmi Note 10 Launch Teased Officially After Rumours Tipping February Debut in India

Redmi Note 10 launch has been officially teased on Weibo. The new development comes just weeks after the rumour mill suggested the existence of the Redmi Note 10 series that could include the Redmi Note 10, the Redmi Note 10 Pro, and the Redmi Note 10 Pro 5G. The new series is expected to succeed the Redmi Note 9 family that debuted with the launch of the Redmi Note 9 Pro and the Redmi Note 9 Pro Max in India in March last year.

Redmi General Manager Lu Weibing has teased the launch of the Redmi Note 10 on Weibo. Instead of giving away details of the phone directly, Lu has posted an image of the Redmi Note 9 4G, asking users about their expectations for the Redmi Note 10.

The Redmi Note 10 is speculated to launch in India alongside the Redmi Note 10 Pro in February. Both phones will be priced aggressively, according to tipster Ishan Agarwal. The Redmi Note 10 in the series is tipped to come in Gray, Green, and White colour options.

Although Xiaomi hasn’t provided any specifics about the phone yet, the Redmi Note 10 Pro 5G purportedly received a certification from the Bureau of Indian Standards (BIS) earlier this month. The phone is also said to have surfaced on the US Federal Communications Commission (FCC) website with the model number M2101K6G. It has also reportedly appeared on the websites of other regulatory bodies including the European Economic Commission (EEC), Singapore’s IMDA, and Malaysia’s MCMC.

Redmi Note 10 series specifications (expected)

The Redmi Note 10 Pro is rumoured to come with a 120Hz display and include the Qualcomm Snapdragon 732G SoC. However, the 5G variant of the Redmi Note 10 Pro is said to come with the Snapdragon 750G SoC. It is speculated to have 6GB and 8GB RAM options as well as 64GB and 128GB storage versions. The Redmi Note 10 Pro models will come with a 64-megapixel primary camera sensor and include a 5,050mAh battery, according to a recent report.

Similar to the Redmi Note 10 Pro models, the Redmi Note 10 is also rumoured to have both 4G and 5G versions. The smartphone is tipped to have a 48-megapixel primary camera sensor and include a 6,000mAh battery.

The Redmi Note 10 Pro and the Redmi Note 10 are both expected to run on Android 11 with MIUI 12 out-of-the-box.


Cybersecurity: Blaming users is not the answer


A punitive approach toward employees reporting data breaches intensifies problems.

Image: iStock/iBrave

Experts warn that when it comes to cybersecurity, blaming users is a terrible idea; doing so likely creates an even worse situation. “Many organizations have defaulted to a blame culture when it comes to data security,” comments Tony Pepper, CEO of Egress Software Technologies, in an email exchange. “They believe actions have consequences and someone has to be responsible.”

“In cases where employees report incidents of data loss they accidentally caused, it’s quite common for them to face serious negative consequences,” continues Pepper. “This, obviously, creates a culture of fear, leading to a lack of self-reporting, which in turn, exacerbates the problem. Many organizations are therefore unaware of the scale of their security issues.”

Pepper’s comments are based on findings gleaned by the independent market research firm Arlington Research. Analysts interviewed more than 500 upper-level managers from organizations within the financial services, healthcare, banking, and legal sectors.

What the analysts found was published in the paper Outbound Email Security Report. Regarding employees responsible for a loss of data, 45% of those surveyed said they would reprimand the employee(s), and 25% said they would likely fire the employee(s).

Pepper suggests that while organizations may believe this decreases the chance of the offense recurring, it can have a different and more damaging effect: employees may avoid reporting security incidents altogether to escape repercussions from company management.

“Especially in these uncertain times, employees are going to be even less willing to self-report, or report others, if they believe they might lose their jobs as the result,” adds Pepper. 

It gets worse 

According to survey findings, a high percentage of organizations rely on their employees to be the primary data breach detection mechanism–particularly when it comes to email. “Our research found that 62% of organizations rely on people-based reporting to alert management about data breaches,” mentions Pepper. “By reprimanding employees who were only trying to do their job, organizations are undermining the reporting mechanism and ensuring incidents will go unreported.”

Without truly understanding why data is escaping the digital confines of an organization, those in charge of cybersecurity will find it hugely difficult to develop a defensive strategy that effectively protects the organization’s data.

Overcome the blame game

Once it is understood that reprimanding employees is ineffective, organizations should look to create a more positive security culture. One immediate benefit is the increased visibility of heretofore unknown security risks.  

Another benefit is the ability to show regulatory bodies the organization has taken all reasonable steps to protect sensitive data. Pepper adds, “If you don’t know where your risks are, it’s hard to put reasonable measures in place. Regulators could surmise that during a data breach investigation and levy higher fines and penalties.” 

Technology has a role

Once the blame game is curtailed, it’s time to get technology involved. “The first step is to get reporting right, using technology, not people, which will remove the pressure of self-reporting from employees and place the responsibility firmly in the hands of those in charge of cybersecurity,” suggests Pepper. “Advances in contextual machine learning mean it’s possible for security tools to understand users and learn from their actions, so they can detect and mitigate abnormal behavior–for example, adding an incorrect recipient to an email.”
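As a simplified, hypothetical illustration of that idea (not Egress’s actual product), the sketch below builds a profile of the domains a user normally emails and flags recipients that fall outside it, such as a mistyped domain.

```python
# Hypothetical sketch: learn which domains a user normally emails and flag
# unusual recipients in a new draft. Real tools use far richer contextual
# machine-learning models; the names and data here are invented.
from collections import Counter

def build_profile(sent_history):
    """Count how often the user has emailed each recipient domain."""
    return Counter(addr.split("@")[1].lower() for addr in sent_history)

def flag_unusual_recipients(profile, recipients, min_seen=1):
    """Return recipients whose domain the user has rarely or never emailed."""
    return [r for r in recipients
            if profile[r.split("@")[1].lower()] < min_seen]

history = ["alice@acme.com", "bob@acme.com", "carol@partner.org",
           "alice@acme.com", "dave@acme.com"]
profile = build_profile(history)

draft = ["alice@acme.com", "alice@acne.com"]    # note the mistyped domain
print(flag_unusual_recipients(profile, draft))  # -> ['alice@acne.com']
```

A real contextual machine-learning system would weigh many more signals, but the principle is the same: compare a new action against the user’s learned behaviour and intervene when it looks abnormal.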

This is where technology makes all the difference. It prevents accidental data loss before it can happen. It empowers employees to be part of the solution, and technology gives the security team unbiased visibility of risks and emerging threats. 

What cybersecurity teams need to understand

Education about potential consequences is vital. Anyone working with the organization’s digital assets needs to understand the possible outcomes from a data breach–for example, regulatory fines or damage to the organization’s reputation. 

It’s a safe bet that when users understand the consequences of emailing client data to the wrong recipient or responding to a phishing email, they’ll be much more likely to report the incident if and when it occurs. Remember: if an incident isn’t reported, there’s no way to remediate it or prevent it from happening again.

Pepper, in conclusion, offers advice to those managing cybersecurity. “The best way to engage employees with security, and ensure they understand its importance, is to create a ‘security-positive’ company culture,” explains Pepper. “Security teams need to reassure the wider organization that, while data breaches are to be taken seriously, employees who report accidental incidents will receive appropriate support from the business and not face severe repercussions.”
