I noticed that the photos taken by my camera were getting blurrier and blurrier, and the cause was the camera lens protector. After I removed it, the photos were clear again, with no further deterioration, so I can be sure it was the protector's fault.
I'm not going to name the brand of the protector. Ironically, the accessory meant to protect my camera from scratches was what caused the trouble.
I use Python 3.6.0, with packages such as BeautifulSoup4 for parsing HTML and the Selenium Python bindings for web browser automation. If you are not familiar with Python, BeautifulSoup, or Selenium, search Google for topics like "How to run a Python program", "Selenium WebDriver", etc.
The code to parse the MIT OCW website (https://ocw.mit.edu/courses/mathematics, you can edit this URL) for ISBNs is as follows. The script checks course pages that have "Syllabus" and "Readings" tabs.
# exclude ISBN-10s that are actually substrings of an ISBN-13
# remove duplicates
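A minimal sketch of the ISBN-extraction and filtering logic, using only the standard library's re module. The regex and the sample page wording are my assumptions; the real script would feed this function HTML text fetched from each course's Readings tab via BeautifulSoup.

```python
import re

# Matches ISBN-10 or ISBN-13 digit runs, with optional hyphens/spaces.
# This pattern is an assumption about how pages print ISBNs.
ISBN_RE = re.compile(r"\b(?:97[89][-\s]?)?(?:\d[-\s]?){9}[\dXx]\b")

def extract_isbns(page_text):
    """Pull ISBN-10/13 candidates out of a page's visible text."""
    raw = [re.sub(r"[-\s]", "", m) for m in ISBN_RE.findall(page_text)]
    return [s for s in raw if len(s) in (10, 13)]  # keep valid lengths only

def clean_isbns(isbns):
    """Drop ISBN-10 matches embedded in an ISBN-13 string, then dedupe."""
    isbn13s = [s for s in isbns if len(s) == 13]
    out = []
    for s in isbns:
        # exclude ISBN-10s that are actually substrings of an ISBN-13
        if len(s) == 10 and any(s in t for t in isbn13s):
            continue
        # remove duplicates, preserving order
        if s not in out:
            out.append(s)
    return out

sample = "ISBN-13: 978-0-262-03384-8 and ISBN-10: 0-262-03384-4"
print(clean_isbns(extract_isbns(sample)))  # → ['9780262033848', '0262033844']
```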
You will get a list of ISBNs (both ISBN-10 and ISBN-13) printed to your console, e.g.:
Copy and paste the list of ISBNs into an isbn.txt file.
Then, run another script, as below, to add books to a Goodreads list based on ISBN. This script uses Selenium: download the Selenium Python package and an Edge/Firefox/Chrome browser driver into the same folder as the script (I am using Edge). Edit the list URL (https://www.goodreads.com/list/show/108086.MIT_Mathematics_syllabus_books) to your own list URL.
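The overall shape of that Selenium script is sketched below, with heavy caveats: the Goodreads locators (a search box named "q", etc.) are hypothetical placeholders I have not verified against the live page, so inspect the actual markup before using them. The Selenium import is deferred inside the function so the isbn.txt-parsing helper runs without a browser installed.

```python
def load_isbns(text):
    """Parse the contents of isbn.txt: one ISBN per line, blanks skipped."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def add_books_to_list(list_url, isbns):
    """Sketch only: open the list page, then search each ISBN.
    All element locators here are hypothetical placeholders."""
    from selenium import webdriver  # deferred: needs a browser driver installed
    from selenium.webdriver.common.by import By

    driver = webdriver.Edge()  # or webdriver.Firefox() / webdriver.Chrome()
    driver.get(list_url)
    # ... log in to Goodreads here (manually, or automate with send_keys) ...
    for isbn in isbns:
        box = driver.find_element(By.NAME, "q")  # hypothetical search box name
        box.clear()
        box.send_keys(isbn)
        box.submit()
        # ... click "Add to list" on the search result, then navigate back ...
    driver.quit()

print(load_isbns("9780262033848\n\n0262033844\n"))
```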
Script to auto add books to Goodreads list using Python and Selenium
Every time you write a new post, you will see a checkbox. Tick it if you don't want to email your subscribers about the new post.
Don’t send email to subscribers checkbox
Edit: As of WordPress 5.0 (December 2018), this no longer works. Watch this GitHub issue page for updates.
Temporary workaround: go to Jetpack > Settings > Discussion, disable 'Allow users to subscribe to your posts and comments and receive notifications via email', publish your post, then enable it again. No email will be sent for that post.
Artificial Intelligence: A Modern Approach (3rd Edition) by Peter Norvig and Stuart J. Russell is the standard textbook of the artificial intelligence field. It is featured in many university course reading lists.
Artificial Intelligence: A Modern Approach book cover. Source: teky.me
In this series, "Reading Artificial Intelligence: A Modern Approach", I am going to post what I learn from reading this book, along with some code.
This is a dense book of 1000+ pages. I have tried to finish it a few times without success, but the more I revisit it, the more familiar it becomes; hopefully I can finish it this time.
Each chapter ends with bibliographical notes, a summary, and exercises. These are good for a quick run-through after you finish a chapter, as well as for testing your understanding.
Chapter 1: Introduction
There are four ways of approaching AI, along two dimensions – thinking/acting and humanly/rationally. They are thinking humanly (e.g. cognitive science), thinking rationally (e.g. logic), acting humanly (e.g. the Turing Test), and acting rationally (e.g. the rational agent). The first dimension, thinking versus acting, concerns thought processes versus behavior, while humanly versus rationally means measuring against human performance versus against doing the right thing. The book's approach to AI is acting rationally, and it seeks to build rational agents. Agents can be robots or software.
Then, the chapter introduces the different fields that contributed to AI – philosophy (dualism, rationalism, materialism, empiricism, induction, logical positivism, observation sentences, confirmation theory), mathematics (logic, computation, probability, algorithms, the incompleteness theorem, tractability, NP-completeness), economics (utility, decision theory, game theory, operations research, Markov decision processes, satisficing), neuroscience (neurons, the singularity), psychology (behaviorism, cognitive psychology), computer engineering (efficient computers, the computer as artifact, programmable computers), control theory and cybernetics (homeostasis, maximizing an objective function), and linguistics (computational linguistics, natural language processing, knowledge representation).
Next, the history of AI.
The first period is the gestation of artificial intelligence (1943-1955).
First work on AI by Warren McCulloch and Walter Pitts (1943) – a model of artificial neurons in which each neuron is either "on" or "off".
Donald Hebb introduced Hebbian learning, a rule for updating connection strengths between artificial neurons (1949).
Marvin Minsky and Dean Edmonds built the first neural network computer (SNARC) with 3000 vacuum tubes and 40 neurons (1950).
Marvin Minsky later proved the limitations of neural networks.
Alan Turing introduced the Turing Test (1950).
The birth of artificial intelligence (1956)
Two-month AI workshop at Dartmouth in the summer of 1956, organized by John McCarthy. The workshop introduced the key players in AI to each other.
At the workshop, Allen Newell and Herbert Simon presented the reasoning program Logic Theorist (LT). It could prove most of the theorems in Chapter 2 of Russell and Whitehead's Principia Mathematica, and even came up with a shorter proof for one theorem.
Early enthusiasm, great expectations (1952-1969)
Allen Newell and Herbert Simon introduced the General Problem Solver (GPS), which approaches problems with subgoals and possible actions, similar to the way humans do, embodying the "thinking humanly" approach.
Allen Newell and Herbert Simon formulated the physical symbol system hypothesis (1976), which states that "a physical symbol system has the necessary and sufficient means for general intelligent action."
Herbert Gelernter introduced the Geometry Theorem Prover. It could prove mathematical theorems that many mathematics students find tricky.
Arthur Samuel wrote a series of checkers programs that played at a strong amateur level, disproving the idea that computers can only do what they are programmed to do when his program learned to play better than its creator.
John McCarthy defined the language LISP (1958), invented time sharing (1958), and published "Programs with Common Sense", which introduced the Advice Taker (1958).
J. A. Robinson discovered the resolution method, a complete theorem-proving algorithm for first-order logic (1965).
Cordell Green's question-answering and planning systems (1969) used logic.
The Shakey robotics project at the Stanford Research Institute (SRI) integrated logic and physical activity.
Minsky and his students worked on limited problems called microworlds.
James Slagle's SAINT program (1963) solved closed-form calculus integration problems typical of first-year college courses.
Tom Evans's ANALOGY program (1968) solved geometric analogy problems of the kind that appear in IQ tests.
Daniel Bobrow's STUDENT program (1967) solved algebra story problems.
David Huffman's vision project (1971), David Waltz's vision and constraint-propagation work, Patrick Winston's learning theory (1970), Terry Winograd's natural language understanding program (1972), and Scott Fahlman's planner (1974).
Winograd and Cowan showed how a large number of elements could collectively represent an individual concept (1963).
Bernie Widrow enhanced Hebb's learning methods and called his networks adalines (1960, 1962).
Frank Rosenblatt did the same with his perceptrons (1962).
The perceptron convergence theorem was introduced (1962).
A dose of reality (1966-1973)
A report by an advisory committee found that "there has been no machine translation of general scientific text, and none is in immediate prospect." All U.S. government funding for machine translation was cancelled.
The realization that many of the problems AI was attempting to solve were intractable, e.g. early "machine evolution" (genetic algorithm) experiments.
The British government ended support for AI research at all but two universities.
Marvin Minsky and Seymour Papert proved that a two-input perceptron cannot recognize when its two inputs are different (1969). Research funding for neural networks ended.
Knowledge-based systems: The key to power? (1969-1979)
Domain-specific systems, also known as expert systems.
Ed Feigenbaum, Bruce Buchanan, and Joshua Lederberg introduced the DENDRAL program (1969), which could infer molecular structure from mass spectrometer data.
Ed Feigenbaum and others began the Heuristic Programming Project (HPP) to investigate the extent to which expert systems could be applied.
Ed Feigenbaum, Bruce Buchanan, and Dr. Edward Shortliffe introduced MYCIN, which diagnosed blood infections using 450 rules acquired from experts. It incorporated a calculus of uncertainty called certainty factors.
Winograd's SHRDLU system understood natural language.
Schank and his students developed programs for understanding natural language.
AI becomes an industry (1980-present)
The first successful commercial expert system, R1, began operation at Digital Equipment Corporation (1982).
Nearly every major U.S. corporation had its own AI group and was using or investigating expert systems.
The Japanese announced the "Fifth Generation" project (1981), a 10-year plan to build intelligent computers running Prolog.
The United States formed the Microelectronics and Computer Technology Corporation (MCC) in response.
Britain's Alvey report reinstated funding cut by the Lighthill report.
The AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988.
Then came the "AI Winter", as companies failed to deliver on extravagant promises.
The return of neural networks (1986-present)
Back-propagation reinvented (mid-1980s)
AI adopts the scientific method (1987-present)
Build on existing theories rather than propose new ones.
Base claims on rigorous theorems and hard experimental evidence rather than intuition.
Work on real-world applications rather than toy examples.
Replicate experiments using shared repositories of test data and code.
Evident in speech recognition, e.g. hidden Markov models (HMMs) – based on rigorous mathematical theory and trained on large corpora of real speech data.
Machine translation, neural networks, data mining, robotics, computer vision, and knowledge representation.
Bayesian network introduced.
The emergence of intelligent agents (1995-present)
Efforts to build a "whole agent": human-level AI (HLAI) or artificial general intelligence (AGI).
Allen Newell, John Laird, and Paul Rosenbloom introduced SOAR (1987, 1990), a complete agent architecture.
Web-based applications, e.g. search engines, recommender systems, and website aggregators; the "-bot" suffix entered everyday language.
The realization that the isolated subfields of AI needed to be reorganized, and that sensor systems must handle uncertainty.
The availability of very large data sets (2001-present)
Increasing availability of very large data sources, e.g. trillions of words of English text, billions of images from the Web, and billions of base pairs of genomic sequences.
A mediocre algorithm with a large data set can beat the best algorithm with a small data set.
Lastly, the chapter surveys the state of the art in AI: robotic vehicles (STANLEY, BOSS), speech recognition (e.g. automated airline booking), autonomous planning and scheduling (NASA's Remote Agent, MAPGEN, MEXAR2), game playing (IBM's DEEP BLUE), spam fighting, logistics planning (DART), robotics (Roomba, PackBot), and machine translation.
However, you can turn off Jetpack's Beautiful Math and use WP QuickLaTeX instead. The font used by WP QuickLaTeX can be adjusted.
After installing WP QuickLaTeX, set it to work site-wide so that you don't have to put [latexpage] in every post where you want to use LaTeX. You can then input LaTeX inline using $..$ and standalone using \[..\].
Also, switch off Jetpack's Photon: WP QuickLaTeX serves its LaTeX as SVG images, which breaks Photon's image delivery. Or, if you want to keep Photon, set WP QuickLaTeX to output GIF or PNG instead.
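For example, with site-wide mode enabled, a post body like the following renders with no [latexpage] shortcode at all (the formulas here are just placeholders to illustrate the two syntaxes):

```latex
Inline math such as $e^{i\pi} + 1 = 0$ flows with the text,
while standalone math gets its own line:
\[
  \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}
\]
```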
“if the history of machine learning were a Hollywood movie, the villain would be Marvin Minsky.”
– Pedro Domingos
Last year, Google DeepMind published in the journal Nature that its program AlphaGo had beaten a 2-dan professional Go player five matches out of five. This is exciting news because, in the artificial intelligence (AI) field, Go is a difficult game to program a computer to play: it has so many possible moves that brute force is not feasible.
Within a week of that, we had another piece of big AI news: the death of Marvin Minsky, a pioneer of the field. I had read Pedro Domingos' book The Master Algorithm before this. In the book, Domingos details how Marvin Minsky did not believe in neural networks, and how the publication of his book Perceptrons killed off the neural network field. Neural networks, as used in deep learning, helped create AlphaGo, so the win is a boost for them. However, AlphaGo does not use pure deep learning; it combines it with other techniques such as Monte Carlo tree search.
Whatever it is, we are on a mission to find out how intelligence works.
You might try Bluehost's Site Backup Pro, but it is very inconvenient; sometimes it does not work, saying "It appears that your firewall or browser may be blocking the necessary login ports". I use another, manual way.
The two things you want to back up are the website files and the databases.
To back up the website files, create a zip of all the website files located in the /var/www or public_html folder through Bluehost's File Manager. Then, log in to the server with FileZilla and download the zipped website files.
There are many types of databases, but the most common is MySQL. Log in to phpMyAdmin and export; you will get an .sql file containing all the databases.
How to restore?
Just copy the website files over to your new web server via FileZilla. For the databases, import the .sql file via phpMyAdmin or whatever database tool you are using.
For WordPress admins
This is for WordPress users – WordPress being the most common type of blog on the internet. You may need to edit wp-config.php with your new database settings; if not, you will get the "Error Establishing A Database Connection" error.
The above is a very easy, manual method: just copy everything over. It works for any website files, not just WordPress. If you only want to back up WordPress, you can try plugins, but the method above works just fine.
First, install Python 2 and Python 3 on your computer. Then, install virtualenv and virtualenvwrapper so that each Django project we create has its own environment, e.g. its own Python version.
pip install virtualenv
pip install virtualenvwrapper
Then, make a virtual environment with your desired Python version. Use workon and deactivate to enter and leave the virtual environment. For the rest of this tutorial, work inside the virtual environment, and deactivate it when you are done.
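mkvirtualenv, workon, and deactivate are shell commands provided by virtualenvwrapper, so there is nothing to show for them in Python directly; but the underlying idea of a per-project environment can be sketched with the standard library's venv module (Python 3 only; the environment name below is a placeholder):

```python
import os
import venv

env_dir = "envs/demo_django_project"  # hypothetical environment name

# Create an isolated environment, like `mkvirtualenv demo_django_project`.
# with_pip=False just keeps this sketch fast; real environments want pip.
venv.create(env_dir, with_pip=False)

# Each environment gets its own interpreter and pyvenv.cfg, so two Django
# projects can use different Python and package versions side by side.
print(os.path.isfile(os.path.join(env_dir, "pyvenv.cfg")))  # True
```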