AI Gun Detection Gaining Popularity In U.S.

AI scanners may become the norm in the United States as governments and private companies look to beef up their surveillance options in the wake of rising gun violence. According to a report from the Washington Post, systems like Evolv Technology's are becoming increasingly popular. Evolv's machines are similar to metal detectors but instead use AI and light emission to detect firearms concealed on a person.

Evolv claims that its system can detect weapons without the traditional "airport-style" process, in which those looking to enter through a security checkpoint must empty their pockets and then pass through a metal detector.

The system is gaining traction throughout the US. New York City Mayor Eric Adams suggested using Evolv Technology's AI weapons detection system on the NYC subway in the weeks after the Brooklyn subway shooting that saw 23 people injured.

Speaking to the Washington Post about the Evolv system, Jamais Cascio, founder of Open the Future, had this to say:

“My concern is what happens when it moves beyond looking for weapons at a concert — when someone decides to add all kinds of inputs on the person being scanned, or if we enter a protest and a government agency can now use the system to track and log us. We know what a metal detector can and can’t tell us. We have no idea how this can be used.”

The idea of AI detecting and logging firearms is not a new one. Omnilert, another company specializing in AI threat detection, has been integrating its technology into existing security camera systems since 2020. Tech giants like Google and Facebook also have their own versions of AI weapons detection in their optical character recognition systems, which log and index firearm serial numbers, making them easily accessible via Google image search.

With 2020 and 2021 seeing record numbers of gun purchases and concealed-carry permit applications, it seems that governments and corporate entities are seeking to beef up their surveillance and security in response.

Could the upcoming verdict of the Supreme Court’s newest 2nd Amendment case affect the proliferation of AI firearms detection as well?

Via: ZeroHedge

Original story: Washington Post

Privacy Advocates Celebrate “Big Win” Against Facial Recognition Giant Clearview A.I.

A historic settlement filed in court on Monday highlighted the power of Illinois’ strong privacy law and will result in new nationwide restrictions on a controversial technology company infamous for selling access to the largest known database of facial images.

The deal permanently banning Clearview AI from providing most private entities with free or paid access to its database stems from a lawsuit that the ACLU and partners filed in 2020, arguing that the company violated Illinois’ Biometric Information Privacy Act (BIPA).

In addition to permanently banning Clearview from granting private companies and individuals access to the database, the settlement imposes some state-specific limits. For the next five years, Clearview cannot grant database access to private companies operating under BIPA's exceptions, or to state or local government entities in Illinois, including law enforcement.

One key exception: Clearview will still be able to provide its database to U.S. banks and financial institutions under a carve-out in the Illinois law.

Hoan Ton-That, chief executive of Clearview AI, said the company did “not have plans” to provide the database “to entities besides government agencies at this time.” The settlement does not mean that Clearview cannot sell any product to corporations. It will still be able to sell its facial recognition algorithm, without the database of 20 billion images, to companies.

Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project, said:

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse.”

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit,” he said. “Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.”

Surveillance Technology Oversight Project executive director Albert Fox Cahn stated “this is a milestone for civil rights, and the ACLU deserves our thanks for once again safeguarding our Constitution.”

“Banning Clearview AI in one state is not enough; we need a national ban,” Fox Cahn asserted. “Illinois has long been ahead of the curve in protecting residents from biometric surveillance, but it’s time for the rest of the country to catch up.”

Via: Common Dreams

Mastercard introducing biometric payments that require a face scan

Amid privacy concerns, payments giant Mastercard is rolling out controversial biometric payment systems that allow users to pay with the wave of a hand or a smile at a camera. The company says the biometric checkout programs will speed up checkout and reduce wait times, and claims that biometric systems are more secure and hygienic than debit and credit cards. (À la exploiting public fears over “biosecurity” in order to usher in a cashless society.)

“Once enrolled, there is no need to slow down the checkout queue searching through their pockets or bag,” Mastercard said while introducing the controversial tech. “Consumers can simply check the bill and smile into a camera or wave their hand over a reader to pay.”

Mastercard will begin testing its biometric checkout systems in Brazil, at five St Marche stores in the city of São Paulo. Interested users can register their biometrics via an app or in-store through Mastercard’s partner Payface. A spokesperson said that Mastercard will soon roll out the biometric checkout system in the UK and is also focusing on markets in Asia, Africa, and Latin America.

The system will help Mastercard tap into the biometrics technology industry, which will be worth over $18 billion by 2026, according to KBV Research. Mastercard cited a study claiming that 74% of the global population has a “positive attitude” towards biometrics.

However, privacy advocates have raised concerns about biometric checkout systems. The (rightful) concern is that the systems will collect and store data that can be used to monitor and track users.

“While it seems Mastercard have taken steps to protect and encrypt this data, as biometric payments become more commonplace the use of such data is likely to evolve and it will inevitably become harder to protect individuals’ rights to privacy,” said Suzie Miles, a partner at law firm Ashfords.

Via: Reclaim the Net

Original story: The Guardian

Commercial brain-computer interface approved for human trials

Clinical trials of the first commercial brain-computer interface (BCI) system, developed by Synchron, have started in the United States. If the trials are deemed successful, the system could be deployed widely, and patients with paralysis could once again be in contact with the outside world.

At this initial stage, the Stentrode system (the prototype of which was developed in 2016 by a DARPA-funded research team) is tasked with confirming its safety, as well as its ability to let patients work effectively with digital devices without the use of their hands. Synchron Inc., the company that developed the system, is thus ahead of its well-known competitor, Elon Musk’s Neuralink, which receives more funding but has not yet fully assembled its staff. Last year, Neuralink raised $205 million, while Synchron raised only $70 million (not mentioned is DARPA’s backing to the tune of $10 million).

When implanted, Stentrode’s electrodes travel through the blood vessels to the brain (Neuralink’s, by contrast, are implanted directly into the skull), and the system begins to translate brain activity into electrical signals, allowing patients to handle text messages, email, online shopping, and other relatively simple tasks.

So far, Synchron said, such projects have received only permits for short-term series of experiments in the laboratory. However, if the new series of trials is successful, the program will be expanded to allow patients to use the systems on a long-term basis. The next step on the path toward approval would be a wider trial to test for efficacy. If the trials succeed, it will likely be several years before the Stentrode is available for sale.

Neuralink nears human testing of brain chip

Neuralink, the US neurotechnology firm co-founded by billionaire entrepreneur Elon Musk, has begun recruiting key employees to run its clinical trials, signaling that it’s inching closer to starting human testing of its brain implants.

The company has posted advertisements to hire a clinical trial director and a clinical trial coordinator. The ads note that the staffers will “work closely with some of the most innovative doctors and top engineers, as well as working with Neuralink’s first clinical trial participants.” Neuralink said the director will lead and help build its clinical research team and will develop “regulatory interactions that come with a fast-paced and ever-evolving environment.” 

Neuralink has already tested its chips in the brains of a macaque monkey and a pig. The company raised eyebrows last April, when it posted footage purporting to show a monkey playing a video game with its mind.

The first human test subjects will be people with severe spinal injuries, such as quadriplegics, Musk said at the Wall Street Journal CEO Council summit. “We have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury,” he said.

Sources: Aroged.com

Bloomberg.com

RT.com

Pre-Crime Australia: Police will use DNA sequencing to predict what suspects look like

Australian federal police have announced they are using next-generation DNA sequencing technology to predict the physical appearance of potential suspects.

Based on DNA left at a crime scene, the technology – also known as massively parallel sequencing – can predict externally visible characteristics of a person even in the absence of matching profiles in police databases.

MPS can “predict gender, biogeographical ancestry, eye colour and, in coming months, hair colour”, according to the AFP.

Experts say the technology is a “gamechanger” for forensic science but also raises issues around racial profiling, heightened surveillance and genetic privacy.

DNA forensics used to rely on a system that matched samples to ones in a criminal DNA database, and did not reveal much beyond identity. However, predictive DNA forensics can reveal things like physical appearance, biological sex and ancestry – regardless of whether people are in a database or not.

What is massively parallel sequencing?

MPS has been used commercially for more than a decade and has been used overseas in forensic cases.

Adrian Linacre, a forensic science expert, describes it as a “massive gamechanger”. The technology is capable of sequencing “tens of millions of bits of DNA in one go”, he said. “This new methodology is telling you things about the person … externally visible characteristics.”

How will next-gen DNA sequencing be used in Australia?

The new sequencing technology will allow investigators to gain information about the physical characteristics of a potential suspect even when there is no matching DNA profile on a law enforcement database. The AFP plans to predict biological sex, “biogeographical ancestry”, eye colour and, in coming months, hair colour. Over the next decade they aim to include traits such as age, body mass index, and height, and even finer predictions for facial metrics such as distance between the eyes, eye, nose and ear shape, lip fullness, and cheek structure.
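As a rough illustration of how this kind of phenotype prediction works in principle: the sequencer reads the genotype at known marker positions (SNPs), and those genotypes feed a statistical model for each trait. The sketch below is a toy, not a forensic model; rs12913832 (in the HERC2 gene) is a real, well-studied eye-colour marker, but the probabilities are invented for illustration.

```python
# Toy illustration (not a real forensic model) of predictive DNA phenotyping:
# read the genotype at a known marker (SNP) and feed it into a statistical
# model. rs12913832 is a real eye-colour SNP; the probabilities are invented.

# Hypothetical genotype -> P(blue eyes) lookup for the single marker.
P_BLUE_GIVEN_GENOTYPE = {
    "GG": 0.9,  # illustrative numbers only
    "AG": 0.5,
    "AA": 0.1,
}

def predict_eye_colour(genotype: str) -> str:
    """Return the more likely eye colour for a genotype at rs12913832."""
    p_blue = P_BLUE_GIVEN_GENOTYPE[genotype]
    return "blue" if p_blue >= 0.5 else "brown"

print(predict_eye_colour("GG"))  # blue
```

Real systems such as the IrisPlex family combine several markers per trait, which is why the AFP can add traits (hair colour, then facial metrics) incrementally as new models mature.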

Pushing the ethical boundaries?

The highly sensitive nature of DNA data, and the difficulty of ever making it truly anonymous, create significant privacy concerns. According to a 2020 government survey of public attitudes to privacy, most Australians are uncomfortable with the idea of their DNA data being collected.

Using DNA for forensics may also reduce public trust in the use of genomics for medical and other purposes. It will be important to set clear boundaries around what can and can’t be predicted in these tests – and when and how they will be used. Despite some progress toward a privacy impact assessment, Australian forensic legislation does not currently provide any form of comprehensive regulation of forensic DNA phenotyping.

Sources: The Guardian

The Conversation

Your Face Is Now A Weapon Of War

Via: ZeroHedge

Who owns your face? You might think that you do, but consider that Clearview AI, an American company that sells facial recognition technology, has amassed a database of ten billion images since 2020.

By the end of the year, it plans to have scraped 100 billion facial images from the internet. It is difficult to assess the company’s claims, but if we take Clearview AI at face value, it has enough data to identify almost everyone on earth and end privacy and anonymity everywhere.

As you read these words, your face is making money for people whom you’ve never met and who never sought your consent when they took your faceprint from your social media profiles and online photo albums. Today, Clearview AI’s technology is used by over 3,100 U.S. law enforcement agencies, as well as the U.S. Postal Service. In Ukraine, it is being used as a weapon of war. The company has offered its tools free of charge to the Ukrainian government, which is using them to identify dead and living Russian soldiers and then contact their mothers.

It would be easy to shrug this off. After all, we voluntarily surrendered our privacy the moment we began sharing photos online, and millions of us continue to use websites and apps that fail to protect our data, despite warnings from privacy campaigners and Western security services. It is tempting to overlook the fact that Ukraine is not using Clearview AI to identify dead Ukrainians, which suggests that we are witnessing the use of facial recognition technology for psychological warfare, not identification. Some people will be fine with the implications of this: if Russian mothers have to receive disturbing photos of their dead sons, so be it.

To understand why we might want to rethink the use of facial recognition technology in conflict, consider the following thought experiment: Imagine a conflict in which the United States was fighting against an opponent who had taken American faceprints to train its facial recognition technology and was using it to identify dead American soldiers and contact their mothers. This would almost certainly cause howls of protest across the United States. Technology executives would be vilified in the press and hauled before Congress, where lawmakers might finally pass a law to protect Americans’ biometric data.

We do not need to wait for these scenarios to occur; Congress could act now to protect Americans’ biometric data. If taking inspiration from the European Union (EU) General Data Protection Regulation (GDPR) seems a step too far, Congress only needs to look to Illinois, whose Biometric Information Privacy Act (BIPA) requires that companies obtain people’s opt-in consent before capturing facial images and other biometrics.

Clearview AI is currently fighting multiple lawsuits in federal and state courts in Illinois for failing to obtain users’ consent. These lawsuits highlight a troubling aspect of facial recognition technology in the United States: Americans’ privacy, civil liberties, and rights over their biometric data vary from state to state, and even within states, and are not protected by federal law.

For all of Clearview AI’s many flaws, the challenge free societies face is about more than the actions of one company. Many companies and governments are using similar means to create the same kinds of tools – PimEyes, FindClone, and TrueFace among them. Liberal democracies can regulate them, but currently there is nothing preventing adversaries from capturing our faces and other biometric data. Failing to act could endanger soldiers, security personnel, and law enforcement officers, as well as civilian populations. It is time to confront this challenge head-on.

Original story: The National Interest

U.S. and U.K. among 60 nations pushing for “misinformation” & “hate speech” management (covert censorship)

The U.S. and 60 partner countries, including the United Kingdom, Canada, Australia, and members of the European Union (EU), have signed a sweeping “Declaration for the Future of the Internet” which commits to bolstering “resilience to disinformation and misinformation” and somehow upholding free speech rights while also censoring “harmful” content.

The White House framed the declaration as something that supports freedom and privacy by focusing on its commitments to protect human rights, the free flow of information, and privacy. The EU put out similar talking points and claimed that those who signed the declaration support a future internet that’s open, free, global, interoperable, reliable, and secure.

However, the commitments in the declaration are vague and often conflicting. For example, the declaration makes multiple commitments to upholding freedom of expression yet also commits to bolstering “resilience to disinformation and misinformation.” It also contains the seemingly contradictory commitment of ensuring “the right to freedom of expression” is protected when governments and platforms censor content that they deem to be harmful.

Furthermore, many of the governments that signed this declaration are currently pushing sweeping online censorship laws or openly supporting online censorship.

While creeping internet monitoring and regulation may seem like a new concept in the U.S., the same process has already been advanced in Canada and the United Kingdom. Canada’s Digital Charter, launched in 2019, threatens platforms with “meaningful financial consequences” if they fail to fight online “hate” and “disinformation.” In May 2021, the UK introduced the first iteration of its Online Safety Bill, which would give the government sweeping censorship powers, censor some “legal but harmful” content, and criminalize “harmful” and “false” communications.

While the current signatories of this declaration are governments, the White House plans to work with “the private sector, international organizations, the technical community, academia and civil society, and other relevant stakeholders worldwide to promote, foster, and achieve” the “shared vision” of this Declaration for the Future of the Internet.

The declaration isn’t legally binding but is intended to be used as a “reference for public policy makers, as well as citizens, businesses, and civil society organizations.” The signatories also intend to translate its principles into “concrete policies and actions; and, work together to promote this vision globally.”

Original Story: Reclaim the Net

EU creating massive facial recognition-based surveillance system

For the past 15 years, police forces searching for criminals in Europe have been able to share fingerprints, DNA data, and details of vehicle owners with each other. Now European lawmakers are set to include millions of photos of people’s faces in this system—and allow facial recognition to be used on an unprecedented scale.

The expansion of facial recognition across Europe is included in wider plans to “modernize” policing across the continent, and it comes under the Prüm II data-sharing proposals. The original Prüm Convention was signed in 2005 by Austria, Belgium, France, Germany, Luxembourg, the Netherlands, and Spain, outside of the EU’s framework – but “open” to the bloc’s other member countries, 14 out of 27 of which have since joined.

The treaty is meant to increase cross-border cooperation in tackling crime and terrorism. What this has meant so far is that the parties to the treaty have been collecting, processing, and sharing data like fingerprints, DNA, information about owners of vehicles, and the like.

“What you are creating is the most extensive biometric surveillance infrastructure that I think we will ever have seen in the world,” says Ella Jakubowska, a policy adviser at the civil rights NGO European Digital Rights (EDRi).

Prüm II plans to significantly expand the amount of information that can be shared, potentially including photos and information from driving licenses. The proposals from the European Commission also say police will have greater “automated” access to information that’s shared. The massive database would then be available to police in various countries across Europe to match against photos of suspects using facial recognition algorithms, in an automated process.

Facial recognition technology has faced significant pushback in recent years as police forces have increasingly adopted it, and it has misidentified people and derailed lives. Dozens of cities in the US have gone as far as banning police forces from using the technology. The EU is debating a ban on the police use of facial recognition in public places as part of its AI Act.

The European proposals allow a nation to compare a photo against the databases of other countries and find out if there are matches – essentially creating one of the largest facial recognition systems in existence.
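The cross-border matching step described above can be sketched in a few lines: in a typical facial recognition pipeline, each photo is reduced to an embedding vector, and a query photo is compared against every stored vector by cosine similarity. Everything below (the database names, the vectors, the 0.95 threshold) is invented for illustration and is not drawn from the Prüm II proposals themselves.

```python
# Minimal sketch (hypothetical data) of cross-database face search:
# faces become embedding vectors; a query is matched by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical national databases of precomputed face embeddings.
databases = {
    "country_A": {"suspect_1": [0.9, 0.1, 0.3]},
    "country_B": {"suspect_2": [0.2, 0.8, 0.5]},
}

def search(query, threshold=0.95):
    """Return (database, identity) pairs whose embedding passes the threshold."""
    hits = []
    for db_name, db in databases.items():
        for identity, emb in db.items():
            if cosine(query, emb) >= threshold:
                hits.append((db_name, identity))
    return hits

print(search([0.88, 0.12, 0.31]))  # near suspect_1's embedding
```

The policy questions turn on exactly these knobs: who sets the threshold, how many databases a single query fans out to, and whether the match is reviewed by a human or acted on automatically.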

The European Data Protection Supervisor (EDPS), which oversees how EU bodies use data under the GDPR, has criticized the planned expansion of Prüm, which could take several years: “Automated searching of facial images is not limited only to serious crimes but could be carried out for the prevention, detection, and investigation of any criminal offenses, even a petty one.”

According to the report, EU spokespeople claim that “a human will review potential matches.”

Sources: Reclaim the Net

Wired.com

FBI Spying On Americans Nearly Triples In 1 Year

The FBI made almost 3.4 million queries of Americans’ data between December 2020 and November 2021, the US intelligence community admitted in an official report on Friday. The FBI said it was looking for foreign hackers, but civil liberties groups called it an “enormous” invasion of privacy.

The FBI alone made “fewer than 3,394,053” queries of US citizens in that time period, related to information collected under the controversial authority to spy on foreigners. The findings were made public in the Annual Intelligence Community Transparency Report.

The electronic data was collected legally under Section 702 of the Foreign Intelligence Surveillance Act, the report claims. According to the ODNI, the number is due to “a number of large batch queries related to attempts to compromise U.S. critical infrastructure by foreign cyber actors” in the first half of 2021, which “included approximately 1.9 million query terms related to potential victims – including US persons.” 

This accounts for the “vast majority of the increase in US person queries conducted by FBI over the prior year.” There were fewer than 1.3 million such queries in the December 2019 to November 2020 period, according to the same findings.

The American Civil Liberties Union (ACLU) has reacted, calling the FBI’s behavior an invasion of privacy “on an enormous scale.” “Today’s report sheds light on the extent of these unconstitutional ‘backdoor searches,’ and underscores the urgency of the problem,” ACLU Senior Staff Attorney Ashley Gorski said in a statement. “It’s past time for Congress to step in to protect Americans’ Fourth Amendment rights.”

Section 702 of FISA allows the DNI and the US attorney general to target non-US persons located outside of the US in order to acquire foreign intelligence.

Original story: Here

F.B.I. looks to (further) expand its social media surveillance network

Data-mining AI company Panamerica Computers is partnering with the FBI to boost the agency’s online surveillance capabilities. The contract is worth up to $27 million and will provide the FBI with 5,000 licenses for one of the company’s tools.

The licenses give the FBI – specifically the Strategic Technology Unit of its Directorate of Intelligence – the right to use a data analytics tool called Babel X, which harvests user data, including location, from the internet.

When the FBI issued a procurement call for the tool – whose purpose, boiled down, is to track a massive number of social media posts – the agency said it must be capable of searching multiple social media sites in multiple languages.

As per the FBI’s procurement documents, the tool had to be able to scrape data from Twitter, Facebook, Instagram, YouTube, LinkedIn, the Deep/Dark Web, VK, and Telegram, while the ability to do the same with Snapchat, TikTok, Reddit, 8Kun, Gab, Parler, ask.fm, Weibo, and Discord would be considered a plus.

In addition, the FBI said it would prefer more “fringe” as well as encrypted messaging platforms to be included in the winning bid. Another requirement was for the tool to carry out surveillance of these sites continuously, while the data collected would be held by the vendor and then pushed to the FBI.

Original Story via: Reclaim the Net

CDC Tracked Millions Of Americans During Lockdowns To Monitor Movement, Compliance

The Centers for Disease Control (CDC) spied on millions of Americans using cell phone location data in order to track movements and monitor whether people were complying with lockdown curfews during the pandemic.

According to CDC documents from 2021 obtained by Motherboard via a Freedom of Information Act (FOIA) request, the program tracked patterns of people visiting K-12 schools – and in one case, monitored “the effectiveness of policy in the Navajo Nation.” The documents reveal that while the CDC used the pandemic to justify purchasing the data more quickly, it actually intended to use it for general agency purposes.

The documents reveal the expansive plan the CDC had last year to use location data from SafeGraph, a highly controversial data broker that the CDC paid $420,000 for access to one year of data.

The purchased data comes from cell phones – meaning SafeGraph can track where a person lives, works, and has been, and then sell that data to various entities.

The data the CDC bought was aggregated – designed to follow broad trends in how people move around – but researchers have raised concerns over how location data can be deanonymized and used to track specific individuals.

“The CDC seems to have purposefully created an open-ended list of use cases, which included monitoring curfews, neighbor-to-neighbor visits, visits to churches, schools and pharmacies, and also a variety of analysis with this data specifically focused on ‘violence,’” said Zach Edwards, a cybersecurity researcher who closely follows the data marketplace.

As far as unmasking individuals, Edwards noted how SafeGraph’s data can be used to pinpoint certain people.

“In my opinion the SafeGraph data is way beyond any safe thresholds [around anonymity],” he said, pointing to one result in SafeGraph’s user interface that showed individual movements to a specific doctor’s office – indicating how finely tuned the ‘aggregated’ data actually is.
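Edwards’s point about “safe thresholds” can be illustrated with a toy example: when an aggregation cell contains only one visit, the “aggregate” effectively describes a single person. The visit log and the k-anonymity threshold below are invented for illustration; they are not SafeGraph’s data or methodology.

```python
# Toy illustration of why "aggregated" location data can still expose
# individuals: an aggregation cell holding a single visit describes one
# person. All data below is invented.
from collections import Counter

# Hypothetical raw visit log: (device_id, place).
visits = [
    ("dev_a", "park"), ("dev_b", "park"), ("dev_c", "park"),
    ("dev_d", "doctor_office"),  # a lone visitor
    ("dev_a", "pharmacy"), ("dev_b", "pharmacy"),
]

# "Aggregate" the log: visit counts per place, with device ids dropped.
counts = Counter(place for _, place in visits)

# Cells below a minimum count are re-identification risks: one extra fact
# ("I saw her at that office on Tuesday") ties the aggregate to a person.
K = 2  # illustrative k-anonymity threshold
risky = [place for place, n in counts.items() if n < K]

print(risky)  # ['doctor_office']
```

This is why privacy researchers insist on minimum cell sizes (and noise) before location aggregates are released, rather than trusting aggregation alone.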

Cell phone location data has been used throughout the pandemic for various purposes – including by media organizations reporting on how people were traveling once lockdowns began to lift.

That said, the CDC wanted the data for more than just tracking Covid-19 policy response. While the procurement documents say the data is for “an URGENT COVID-19 PR [procurement request],” one of the included use cases reads “Research points of interest for physical activity and chronic disease prevention such as visits to parks, gyms, or weight management businesses.”

The data purchased by the CDC was SafeGraph’s “U.S. Core Place Data,” “Weekly Patterns Data,” and “Neighborhood Patterns Data,” the latter of which includes information such as ‘home dwelling time’ which is aggregated by state and census block, per Motherboard.

Both SafeGraph and the CDC have previously touched on their partnership, but not in the detail that is revealed in the documents. The CDC published a study in September 2020 which looked at whether people around the country were following stay-at-home orders, which appeared to use SafeGraph data. 

Via: ZeroHedge

Original Story: Motherboard