Earprint touchscreen sensing: a comparison between hand-crafted features and transfer learning for smartphone authentication

Jose-Luis Cabra1+, Carlos Parra2, and Luis Trujillo2

 

1Fundacion Universitaria Compensar, Avenida (Calle) 32 No. 17-30. Bogota, 111311, Colombia
jlcabra@ucompensar.edu.co

2Pontificia Universidad Javeriana, Ak. 7 #40 - 62. Bogota, 110231, Colombia
{carlos.parra, trujillo.luis}@javeriana.edu.co

 

Abstract

The smartphone's lock screen sits at a threshold between usability and comfort. For example, some smartphone users prefer not to use the slide or call-acceptance button, but a more secure and efficient way of picking up the phone instead. Others prefer the smoothest possible interaction with their devices to get quick access to smartphone services. In this paper, from a smartphone authentication point of view, we propose using the touchscreen as an ear-shape detector. This approach helps verify the right user for incoming calls, supporting user privacy and avoiding any action approval through a button. In a one-against-all authentication scheme, looking for the best discrimination model, genuine and impostor data are evaluated with two different authentication engines: (i) transfer learning, and (ii) classifiers fed with fused hand-crafted features such as LBP, HOG, and LIOP. Before either authentication approach is executed, the ear shape is extracted by a custom heuristic architecture that removes skin-related noise and highlights the region of interest. The classification results of this paper confirm that the earprint supports user verification, reaching an accuracy of 97.7%.
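To make the hand-crafted pipeline concrete, the sketch below shows one plausible way to fuse LBP and HOG descriptors from a segmented ear-shape image and train a one-against-all verifier. All parameters (image size, LBP/HOG settings, the SVM classifier) are illustrative assumptions rather than the paper's exact configuration, and LIOP is omitted because it lacks a common off-the-shelf Python implementation.

```python
# Hedged sketch: fused hand-crafted features (LBP + HOG) feeding a binary
# classifier for one-against-all earprint verification.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def fused_features(ear_img, lbp_points=8, lbp_radius=1):
    """Concatenate an LBP histogram and a HOG descriptor for one ear image."""
    lbp = local_binary_pattern(ear_img, lbp_points, lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                               range=(0, lbp_points + 2), density=True)
    hog_vec = hog(ear_img, orientations=9, pixels_per_cell=(4, 4),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])

def train_verifier(genuine_imgs, impostor_imgs):
    """One-against-all scheme: genuine user = class 1, all other users = class 0."""
    X = np.array([fused_features(img) for img in genuine_imgs + impostor_imgs])
    y = np.array([1] * len(genuine_imgs) + [0] * len(impostor_imgs))
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

# Toy usage with random stand-in images; real input would be the segmented
# ear-shape region extracted from the touchscreen capacitance map.
rng = np.random.default_rng(0)
genuine = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
impostor = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(40)]
model = train_verifier(genuine, impostor)
probe = rng.integers(0, 256, (32, 32), dtype=np.uint8)
print(model.predict([fused_features(probe)]))  # 1 = accepted, 0 = rejected
```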

Keywords: Smartphone Authentication, Earprint, Capacitive images, Machine Learning, Touchscreen

 

+: Corresponding author: Jose-Luis Cabra
Department of Telecommunication Engineering, Fundacion Universitaria Compensar, Avenida (Calle) 32 No. 17-30. Bogota, 111311, Colombia. Tel: +57-311-822-86-36

 

Journal of Internet Services and Information Security (JISIS), 12(3): 16-29, August 2022
Received: May 18, 2022; Accepted: July 27, 2022; Published: August 31, 2022

DOI: 10.22667/JISIS.2022.08.31.016