Tuesday, January 31, 2017
Using PHP to grab the text content of a web page
<?php
// Take the city from the query string and strip spaces so it fits the URL.
$city = $_GET['city'];
$city = str_replace(" ", "", $city);

// Fetch the forecast page: http://www.weather-forecast.com/locations/<city>/forecasts/latest
$content = file_get_contents("http://www.weather-forecast.com/locations/" . $city . "/forecasts/latest");

// Regex notes: \" is a literal quote, \/ a literal slash; the s modifier lets . match newlines.
preg_match("/<\/b><span class=\"read-more-small\"><span class=\"read-more-content\"> <span class=\"phrase\">(.*?)<\/span>/s", $content, $matches);

// $matches[1] holds just the captured forecast text; $matches[0] would include the surrounding markup.
echo $matches[1];
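For example, if this is saved as weather.php (a filename I'm assuming; the post doesn't name the file), requesting weather.php?city=Taipei fetches http://www.weather-forecast.com/locations/Taipei/forecasts/latest and prints the first forecast phrase found on that page.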
Thursday, January 26, 2017
Commercial photography set props
Shop: 安安拍照道具拍图拍摄
https://ananshenghuoguan.world.taobao.com/?spm=a312a.7700824.0.0.EKq3Uw
1. Reflections
https://world.taobao.com/item/522965437984.htm?fromSite=main&spm=a312a.7700824.w4002-15053131036.52.ta3L2d
https://world.taobao.com/item/533859785965.htm?fromSite=main&spm=a312a.7700824.w4002-15053131036.19.ec9hXC
2. Wood plank flooring
https://world.taobao.com/item/521473561938.htm?fromSite=main&spm=a312a.7700824.w4002-15053131036.52.iufQ0l
https://world.taobao.com/item/523758856151.htm?fromSite=main&spm=a312a.7700824.w4002-15053131036.62.D6xYt7
3. Pinecones
https://world.taobao.com/item/35638911377.htm?fromSite=main&spm=a312a.7700824.w4002-15053131036.24.OEBVTs
Must buy:
https://world.taobao.com/item/520874920724.htm?fromSite=main&spm=a312a.7700824.w4002-15053131036.16.OEBVTs
4. Retro Chinese calligraphy / ethnic-style linen backdrop cloth for online-shop product and tea-set photography
https://world.taobao.com/item/527304499441.htm?fromSite=main&spm=a312a.7700824.w4002-15053131036.25.cqrN62
Wednesday, January 25, 2017
360-degree websites
Ref : https://www.holobuilder.com/explore?page=details&id=6617458781716480
Ref : http://www.rwth-aachen.de/cms/root/Die-RWTH/Kontakt-Lageplaene/Raumverwaltung/~bdsd/Wildenhof/
Ref : http://www.panophoto.eu/Search/
When you hit the share button, you get an iframe snippet that you just have to copy and add to your website.
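For illustration, the snippet generally has a shape like the following; the actual src value comes from the share dialog, so this is a hypothetical example, not the exact output:
<iframe src="URL-FROM-THE-SHARE-DIALOG" width="800" height="450" allowfullscreen></iframe>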
Mixpanel example
// Mixpanel example
// Assumes the Mixpanel JS snippet is already loaded and mixpanel.init() has been called with your project token.
$(document).ready(function(){
    // Send an event when the page is loaded.
    var pageTitle = $('title').text();
    mixpanel.track('PageView', {
        'pageTitle': pageTitle
    });

    // Send an event when a link is clicked.
    // Bound inside ready() so pageTitle stays in scope.
    $('a').click(function(event){
        var link = $(this).attr('href');
        var title = $(this).attr('title');
        mixpanel.track('ClickLink', {
            'link': link || '',
            'title': title || '',
            'pageTitle': pageTitle
        });
    });
});
Saturday, January 14, 2017
animated css
First, download the JS from http://imakewebthings.com/waypoints/
and put it in your /js/ folder.
Then download the CSS from https://daneden.github.io/animate.css/
and put it in your /css/ folder.
Include them in your code (the paths below assume the files live under vendors/):
css:
<link rel="stylesheet" type="text/css" href="vendors/css/animate.css">
js:
<script src="vendors/js/jquery.waypoints.min.js"></script>
Add the corresponding class name in the HTML.
For example: <span class="js--wp-5">
<div class="col-md-8 col-sm-8 col-xs-6 tetimonmeta">
林惠文
<span class="js--wp-5">小文</span>
</div>
Then add this to your JS:
$(document).ready(function(){
    $('.js--wp-5').waypoint(function(direction){
        // Add the animation classes once the element scrolls halfway into view.
        $('.js--wp-5').addClass('animated zoomInDown');
    }, {
        offset: '50%'
    });
});
You can pick whichever effect you want.
For example, for bounceIn just type bounceIn. The capitalization must match exactly.
Friday, January 13, 2017
face detection on the web
Ref : https://code.tutsplus.com/tutorials/how-to-create-a-face-detection-app-with-react-native--cms-26491
1. What Is the Face Detection API?
Before we start writing our app, I would like to take a moment to talk about the API we will be using for face detection. Microsoft's face detection API provides face detection and face recognition functionality via a cloud-based API. This allows us to send an HTTP request containing either an image or a URL of an existing image on the web, and receive data about any faces detected in the image.
Sending Requests to the API
You can make requests to Microsoft's face detection API by sending a POST request to https://api.projectoxford.ai/face/v1.0/detect. The request should contain the following header information:
- Content-Type: This header field contains the data type of the request body. If you are sending the URL of an image on the web, then the value of this header field should be application/json. If you are sending an image, set the header field to application/octet-stream.
- Ocp-Apim-Subscription-Key: This header field contains the API key used for authenticating your requests. I will show you how to obtain an API key later in this tutorial.
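To make this concrete, here is a rough sketch of such a request using jQuery's $.ajax; the key and image URL are placeholders of my own, not values from the tutorial:
// Send the URL of an image on the web to the detection endpoint.
$.ajax({
    url: 'https://api.projectoxford.ai/face/v1.0/detect',
    method: 'POST',
    contentType: 'application/json', // sending a URL, not raw image bytes
    headers: { 'Ocp-Apim-Subscription-Key': 'YOUR_API_KEY' }, // placeholder key
    data: JSON.stringify({ url: 'https://example.com/photo.jpg' }) // placeholder image
}).done(function(faces){
    console.log(faces); // array of detected faces
});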
By default, the API only returns data about the boxes that are used to enclose the detected faces in the image. In the rest of this tutorial, I will refer to these boxes as face boxes. This option can be disabled by setting the returnFaceRectangle query parameter to false. The default value is true, which means that you don't have to specify it unless you want to disable this option.
You can supply a few other optional query parameters to fetch additional information about the detected faces:
- returnFaceId: If set to true, this option assigns a unique identifier to each of the detected faces.
- returnFaceLandmarks: By enabling this option, the API returns an array of face landmarks for the detected faces, including eyes, nose, and lips. This option is disabled by default.
- returnFaceAttributes: If this option is enabled, the API looks for and returns unique attributes for each of the detected faces. You need to supply a comma-separated list of the attributes that you are interested in, such as age, gender, smile, facial hair, head pose, and glasses.
Below is a sample response that you get from the API given the following request URL:
https://api.projectoxford.ai/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=true&returnFaceAttributes=age,gender,smile,facialHair,headPose,glasses
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": {
      "width": 78,
      "height": 78,
      "left": 394,
      "top": 54
    },
    "faceLandmarks": {
      "pupilLeft": { "x": 412.7, "y": 78.4 },
      "pupilRight": { "x": 446.8, "y": 74.2 },
      "noseTip": { "x": 437.7, "y": 92.4 },
      "mouthLeft": { "x": 417.8, "y": 114.4 },
      "mouthRight": { "x": 451.3, "y": 109.3 },
      "eyebrowLeftOuter": { "x": 397.9, "y": 78.5 },
      "eyebrowLeftInner": { "x": 425.4, "y": 70.5 },
      "eyeLeftOuter": { "x": 406.7, "y": 80.6 },
      "eyeLeftTop": { "x": 412.2, "y": 76.2 },
      "eyeLeftBottom": { "x": 413.0, "y": 80.1 },
      "eyeLeftInner": { "x": 418.9, "y": 78.0 },
      "eyebrowRightInner": { "x": 4.8, "y": 69.7 },
      "eyebrowRightOuter": { "x": 5.5, "y": 68.5 },
      "eyeRightInner": { "x": 441.5, "y": 75.0 },
      "eyeRightTop": { "x": 446.4, "y": 71.7 },
      "eyeRightBottom": { "x": 447.0, "y": 75.3 },
      "eyeRightOuter": { "x": 451.7, "y": 73.4 },
      "noseRootLeft": { "x": 428.0, "y": 77.1 },
      "noseRootRight": { "x": 435.8, "y": 75.6 },
      "noseLeftAlarTop": { "x": 428.3, "y": 89.7 },
      "noseRightAlarTop": { "x": 442.2, "y": 87.0 },
      "noseLeftAlarOutTip": { "x": 424.3, "y": 96.4 },
      "noseRightAlarOutTip": { "x": 446.6, "y": 92.5 },
      "upperLipTop": { "x": 437.6, "y": 105.9 },
      "upperLipBottom": { "x": 437.6, "y": 108.2 },
      "underLipTop": { "x": 436.8, "y": 111.4 },
      "underLipBottom": { "x": 437.3, "y": 114.5 }
    },
    "faceAttributes": {
      "age": 71.0,
      "gender": "male",
      "smile": 0.88,
      "facialHair": {
        "mustache": 0.8,
        "beard": 0.1,
        "sideburns": 0.02
      }
    },
    "glasses": "sunglasses",
    "headPose": {
      "roll": 2.1,
      "yaw": 3,
      "pitch": 0
    }
  }
]
This sample response is pretty self-explanatory, so I am not going to dive deeper into what each attribute stands for. The data can be used to show the detected faces and their attributes to the user, for example by interpreting the x and y coordinates of the landmarks, or the top and left positioning of the face box.
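As a small, hypothetical sketch of that last point, here is one way to overlay a face box with jQuery, assuming the parsed response is in a variable named faces and the photo sits at its natural size inside a position: relative container with the id photo-container (all of these names are my assumptions):
// Draw a border over the first detected face using its faceRectangle.
var rect = faces[0].faceRectangle;
$('<div></div>').css({
    position: 'absolute',
    left: rect.left + 'px',
    top: rect.top + 'px',
    width: rect.width + 'px',
    height: rect.height + 'px',
    border: '2px solid red'
}).appendTo('#photo-container');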
Acquiring an API Key
To use Microsoft's face detection API, each request needs to be authenticated with an API key. Here are the steps you need to take to acquire such a key.
Create a Microsoft Live account if you don't already have one. Sign in with it and register for a Microsoft Azure account; if you don't have one yet, you can sign up for a free trial, which gives you access to Microsoft's services for 30 days.
For the face detection API, this allows you to send up to twenty API calls per minute for free. If you already have an Azure account, then you can subscribe to the Pay-As-You-Go plan so you only pay for what you use.
Once your Microsoft Azure account is set up, you are redirected to the Microsoft Azure Portal. In the portal, enter cognitive services in the search bar and click the result that says Cognitive Services accounts (preview).