Friday, January 13, 2017

face detection on the web

Ref: https://code.tutsplus.com/tutorials/how-to-create-a-face-detection-app-with-react-native--cms-26491


Before we start writing our app, I would like to take a moment to talk about the API we will be using for face detection. Microsoft's face detection API provides face detection and face recognition functionality as a cloud-based service. This allows us to send an HTTP request containing either an image or the URL of an existing image on the web, and receive data about any faces detected in that image.
You can make requests to Microsoft's face detection API by sending a POST request to https://api.projectoxford.ai/face/v1.0/detect. The request should contain the following header information (a minimal request sketch follows this list):
  • Content-Type: This header field contains the data type of the request body. If you are sending the URL of an image on the web, then the value of this header field should be application/json. If you are sending an image, set the header field to application/octet-stream.
  • Ocp-Apim-Subscription-Key: This header field contains the API key used for authenticating your requests. I will show you how to obtain an API key later in this tutorial.
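A minimal sketch of such a request, in TypeScript, assuming the standard fetch API is available (as it is in React Native). YOUR_API_KEY is a hypothetical placeholder for your own subscription key, and the JSON body with a url field is the documented format for sending the URL of a web image:

// Sketch: detect faces in an image hosted on the web.
// YOUR_API_KEY is a hypothetical placeholder; substitute your own subscription key.
const YOUR_API_KEY = '...';

async function detectFaces(imageUrl: string): Promise<unknown> {
  const response = await fetch('https://api.projectoxford.ai/face/v1.0/detect', {
    method: 'POST',
    headers: {
      // We are sending the URL of an image on the web, so the body is JSON.
      'Content-Type': 'application/json',
      // Authenticates the request with your API key.
      'Ocp-Apim-Subscription-Key': YOUR_API_KEY,
    },
    body: JSON.stringify({ url: imageUrl }),
  });
  return response.json();
}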
By default, the API only returns data about the boxes that are used to enclose the detected faces in the image. In the rest of this tutorial, I will refer to these boxes as face boxes. This option can be disabled by setting the returnFaceRectangle query parameter to false. The default value is true, which means that you don't have to specify it unless you want to disable this option.

You can supply a few other optional query parameters to fetch additional information about the detected faces (an example request URL follows this list):
  • returnFaceId: If set to true, this option assigns a unique identifier to each of the detected faces.
  • returnFaceLandmarks: By enabling this option, the API returns an array of face landmarks of the detected faces, including eyes, nose, and lips. This option is disabled by default.
  • returnFaceAttributes: If this option is enabled, the API looks for and returns unique attributes for each of the detected faces. You need to supply a comma-separated list of the attributes that you are interested in, such as age, gender, smile, facial hair, head pose, and glasses.
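For example, a request URL that enables all three options (with a shortened attribute list) would look like this:

https://api.projectoxford.ai/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=true&returnFaceAttributes=age,gender,smile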
The API responds with a JSON array containing one object per detected face, shaped according to the query parameters you enabled.
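Here is a hedged sketch of that shape, with made-up values, for a request that enables face IDs, landmarks, and the age, gender, and smile attributes (landmarks truncated for brevity):

[
  {
    "faceId": "some-unique-identifier",
    "faceRectangle": { "top": 120, "left": 80, "width": 200, "height": 200 },
    "faceLandmarks": {
      "pupilLeft": { "x": 140.2, "y": 170.5 },
      "pupilRight": { "x": 215.8, "y": 172.1 }
    },
    "faceAttributes": { "age": 30.5, "gender": "female", "smile": 0.9 }
  }
]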
This response is fairly self-explanatory, so I am not going to dive deeper into what each attribute stands for. You can use the data to show the detected faces and their attributes to the user by interpreting the top and left values of each face box as y and x coordinates.
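As a sketch of that interpretation in TypeScript (assuming a React Native-style absolute positioning model; faceBoxStyle is a hypothetical helper):

// Mirrors the faceRectangle fields in the API response.
interface FaceRectangle {
  top: number;
  left: number;
  width: number;
  height: number;
}

// Hypothetical helper: turns a faceRectangle into an absolutely
// positioned style object for overlaying a face box on the image.
function faceBoxStyle(rect: FaceRectangle) {
  return {
    position: 'absolute' as const,
    top: rect.top,    // y coordinate of the box
    left: rect.left,  // x coordinate of the box
    width: rect.width,
    height: rect.height,
    borderWidth: 2,   // draw the box as a 2px outline
  };
}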
To use Microsoft's face detection API, each request needs to be authenticated with an API key. Here are the steps you need to take to acquire such a key.
Create a Microsoft Live account if you don't already have one. Sign in with it and sign up for a Microsoft Azure account. If you don't have a Microsoft Azure account yet, you can sign up for a free trial, which gives you access to Microsoft's services for 30 days.
For the face detection API, this allows you to send up to twenty API calls per minute for free. If you already have an Azure account, then you can subscribe to the Pay-As-You-Go plan so you only pay for what you use.
Once your Microsoft Azure account is set up, you are redirected to the Microsoft Azure Portal. In the portal, enter cognitive services in the search bar and click the result that says Cognitive Services accounts (preview), which opens the interface for managing your Cognitive Services accounts.