Difference between revisions of "EGR 103/Concept List Fall 2019"

 
** $$r^2=\frac{S_t-S_r}{S_t}=1-\frac{S_r}{S_t}$$ is the coefficient of determination; it is a normalized value that gives information about how well a model predicts the data.  An $$r^2$$ value of 1 means the model perfectly predicted every value in the data set.  A value of 0 means the model does as well as having picked the average.  A negative value means the model is '''worse''' than merely picking the average.
 
  
== Lecture 18 - More statistics and curve fitting ==
* Mathematical proof of solution to [[General Linear Regression]]
* [[Python:Fitting]]
  
<!--
== Lecture 19 - 3D Plotting ==
 
* [[Python:Plotting Surfaces]]
 
 
== Lecture 11 - Style, Strings and Files ==
 
* Discussion of PEP and PEP8 in particular
 
* Installation and use of autopep8 to make .py files stylistically correct
 
** autopep8 FILE.py --aggressive
 
** Adding the -i (in-place) option will actually replace FILE.py with the corrected version rather than just printing the proposed fixes
 
 
 
== Lecture 12 - Monte Carlo Methods ==
 
* Using repetition to approximate statistics
 
 
<div class="mw-collapsible mw-collapsed">
 
<source lang=python>
 
# Random walk simulator:
 
</source>
 
<div class="mw-collapsible-content">
 
<source lang=python>
 
import numpy as np
 
import matplotlib.pyplot as plt
 
import math as m
 
 
 
def start_fig(fnum=1):
 
    fig, ax = plt.subplots(num=fnum)
 
    fig.clf()
 
    fig, ax = plt.subplots(num=fnum)
 
    return fig, ax
 
 
 
def take_step():
 
    return 2*np.random.randint(0, 2)-1
 
 
 
def take_walk(steps):
 
    loc = np.zeros(steps+1)
 
    for k in range(1,steps+1):
 
        loc[k] = loc[k-1]+take_step()
 
 
 
    return loc
 
 
 
 
 
if __name__ == "__main__":
 
    num_pos=0
 
    if 0:
 
        for k in range(1000000):
 
            x = take_step()
 
            if x == 1:
 
                num_pos+=1
 
 
 
        print(num_pos)
 
 
 
    if 0:
 
        print(take_walk(6))
 
 
 
    num_walkers = 640000
 
    end_loc = np.zeros(num_walkers)
 
    for k in range(num_walkers):
 
        w = take_walk(6)
 
        end_loc[k] = w[-1]
 
 
 
    fig, ax = start_fig(1)
 
    ax.hist(end_loc, bins=13, range=(-6.5, 6.5))
 
 
 
</source>
 
</div>
 
</div>
 
 
 
== Lecture 13 - Linear Algebra I ==
 
* 1-D and 2-D Arrays
 
* Matrix multiplication (using @ in Python)
 
* Setting up linear algebra equations
 
* Determinants of matrices and the meaning when the determinant is 0
 
* Inverses of matrices
 
* Solving systems of equations
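* A minimal sketch (not from class; the 2x2 system here is made up) of these operations in NumPy:
<source lang=python>
import numpy as np

# Made-up system: 3x + 2y = 5 and x - y = 0
A = np.array([[3, 2], [1, -1]])
b = np.array([5, 0])

print(np.linalg.det(A))       # nonzero, so a unique solution exists
print(np.linalg.inv(A) @ b)   # solve using the inverse
print(np.linalg.solve(A, b))  # preferred: solve the system directly
</source>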
 
 
 
== Lecture 14 - Linear Algebra II ==
 
* Norms
 
** p-norm for 1-D arrays -- mainly 1, 2, or infinity
 
** 1, Frobenius, infinity norms for 2-D arrays
 
** 2-norm for 2-D arrays -- harder to calculate but most used -- found in a VERY different way from the 2-norm of a 1-D array (largest singular value rather than square root of the sum of the squares)
 
* Condition numbers
 
* log10(condition number) estimates the number of digits of precision that may be lost due to system geometry
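* A minimal sketch (the matrix here is made up) of computing these norms and the condition number with NumPy:
<source lang=python>
import numpy as np

# Made-up, nearly-singular matrix
A = np.array([[1.0, 1.0], [1.0, 1.0001]])

print(np.linalg.norm(A, 1))       # 1-norm: maximum column sum
print(np.linalg.norm(A, np.inf))  # infinity-norm: maximum row sum
print(np.linalg.norm(A, 'fro'))   # Frobenius norm: root of sum of squares
print(np.linalg.norm(A, 2))       # 2-norm: largest singular value

kappa = np.linalg.cond(A)         # condition number (2-norm by default)
print(kappa, np.log10(kappa))     # log10 estimates digits of precision at risk
</source>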
 
 
 
== Lecture 15 - Test Review ==
 
 
 
== Lecture 16 - Test ==
 
 
 
== Lecture 17 - Statistics and Curve Fitting 1 ==
 
 
 
== Lecture 18 - Statistics and Curve Fitting 2 ==
 
  
== Lecture 19 - Statistics and Curve Fitting 2.5 ==
== Lecture 20 - Roots of Equations ==
 
 
 
 
* [[Python:Finding roots]]
 
 
* SciPy references (all from [https://docs.scipy.org/doc/scipy/reference/optimize.html Optimization and root finding]):
 
 
** [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html scipy.optimize.brentq] - closed method root finding
 
 
** [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html scipy.optimize.fsolve] - open method root finding
 
== Lecture 21 - Roots and Extrema ==
* SciPy references (all from [https://docs.scipy.org/doc/scipy/reference/optimize.html Optimization and root finding]):
 
** [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html scipy.optimize.fmin] - unbounded minimization
 
 
** [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fminbound.html scipy.optimize.fminbound] - bounded minimization
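* A minimal sketch of calling these four functions on a made-up function f(x) = (x - 1)**2 - 3, which has roots at 1±sqrt(3) and a minimum at x = 1:
<source lang=python>
from scipy.optimize import brentq, fsolve, fmin, fminbound


def f(x):
    return (x - 1)**2 - 3


root_closed = brentq(f, 1, 4)      # closed method: needs a sign change on [1, 4]
root_open = fsolve(f, 2.0)         # open method: needs an initial guess
x_min = fmin(f, 0.0, disp=False)   # unbounded minimization from an initial guess
x_min_b = fminbound(f, 0, 5)       # bounded minimization on [0, 5]

print(root_closed, root_open, x_min, x_min_b)
</source>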
 
 
 
 
 
  
  
 
<!--
 
=== Lecture 3 ===
 
* 7 Steps for finding prime numbers
 
* prime program -- includes intro to input(), if tree, for loop, print(), remainder %
 
=== Lecture 4===
 
* Function definitions
 
** Positional and keyword arguments (kwargs)
 
** Default values
 
** Returning tuples -- can be received by a single variable or by a tuple of the right size
 
* Aquarium
 
=== Lecture 5 ===
 
* print() and format specifications: [https://docs.python.org/3/library/string.html#formatspec link]
 
** Main components are width, precision, and type; sometimes initial +
 
** e and f can print integers as floats; d '''cannot''' print floats
 
* relational and logical operators - how they work on item, string, list, tuple
 
* if trees
 
* while loops
 
* for loops
 
** NOTE: in the case of
 
for y in x
 
:: If the ''entries'' of x are changed (really, if x is changed in any way that leaves its location in memory '''unchanged'''), y will iterate over the changed entries.  If x is changed so that a copy first has to be made (for example, it is set equal to a slice of itself), then y will iterate over the original entries.  Note the differences between:
 
<source lang=python>
 
x = [1, 2, 3, 4, 5]
 
for y in x:
 
    print(x, y)
 
    x[4] = [0]
 
    print(x, y)
 
</source>
 
and:
 
<source lang=python>
 
x = [1, 2, 3, 4, 5]
 
for y in x:
 
    print(x, y)
 
    x = x[:-1]
 
    print(x, y)
 
</source>
 
 
* counting characters program
 
<div class="mw-collapsible mw-collapsed">
 
<source lang=python>
 
# letter_typing.py from class:
 
</source>
 
<div class="mw-collapsible-content">
 
<source lang=python>
 
def check_letters(phrase):
 
    vowels = "aeiou"
 
    numbers = "0123456789"
 
    consonants = "bcdfghjklmnpqrstvwxyz"
 
    # vowels, numbers, consonants, and other in that order
 
    count = [0, 0, 0, 0]
 
   
 
    for letter in phrase:
 
        if letter.lower() in vowels:
 
            count[0] += 1
 
        elif letter.lower() in numbers: # .lower not really needed here
 
            count[1] += 1
 
        elif letter.lower() in consonants:
 
            count[2] += 1
 
        else:
 
            count[3] += 1
 
           
 
    return count
 
 
out = check_letters("Let's go Duke University 2018!")
 
print(out)
 
</source>
 
</div>
 
</div>
 
* Question in class: does Python have ++ or -- operators; it does not.  You need x += 1 or x -= 1
 
=== Lecture 6 ===
 
Florence - material moved to Lecture 7 and Lab 4
 
 
===Lecture 7 ===
 
* Distinction between == and is (see the sketch at the end of this lecture's list)
 
* Iterable types and how they work (list, tuple, string)
 
* Things that do not change the memory address of a list (so a loop that is already iterating over it will see the changes):
 
** +=, .append(), .extend(), .insert(), .remove(), .pop(), .sort(), .reverse(), .clear()
 
* Things that do change memory address of a list:
 
** Total replacement, replacement by self-slice, making a copy
 
* Singularity functions
 
** Unit step <math>u(t)</math>
 
** See [[Singularity Functions]] for way too much information - more in lab!
 
*** [[Singularity_Functions#Alternate_Names_for]], [[Singularity_Functions#Building_a_Mystery_Signal]], and [[Singularity_Functions#Accumulated_Differences]] might be particularly helpful!
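* A small sketch (not from class) of == versus is and of how in-place changes differ from replacement, using id() to show the memory address:
<source lang=python>
x = [1, 2, 3]
y = [1, 2, 3]
z = x

print(x == y, x is y)  # True False: same contents, different objects
print(x == z, x is z)  # True True: z is another name for the same object

print(id(x))
x.append(4)   # in-place change: memory address stays the same
print(id(x))
x = x[:-1]    # replacement by self-slice: new object, new address
print(id(x))
</source>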
 
 
=== Lecture 8 ===
 
* Be sure to use the resources available to you, especially the [http://greenteapress.com/thinkpython2/html/index.html Think Python] links in the [http://classes.pratt.duke.edu/EGR103F18/schedules.html Class Schedule]!
 
* When checking for valid inputs, sometimes the order of asking the questions can be important.  If you want a single positive integer, you first need to ask if you have an integer before you ask whether it is less than 1.  The right way to go is something like:
 
<source lang=python>
 
if not isinstance(n, int) or n<1:
 
</source>
 
because if n is not an integer, the logic will be true without having to ask the second question.  This is a good thing, because if n is, for example, a list, asking the question n<1 would yield an error. 
 
* We built up a program whose end goal was to compare and contrast the time it takes different methods of calculating Fibonacci numbers to run.
 
* The [https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html plt.plot(x, y)] command has some near cousins:
 
** [https://matplotlib.org/api/_as_gen/matplotlib.pyplot.semilogy.html plt.semilogy(x, y)] will plot with a regular scale for x and a base-10 log scale for y
 
** [https://matplotlib.org/api/_as_gen/matplotlib.pyplot.semilogx.html plt.semilogx(x, y)] will plot with a base-10 log scale for x and a regular scale for y
 
** [https://matplotlib.org/api/_as_gen/matplotlib.pyplot.loglog.html plt.loglog(x, y)] will plot with base-10 log scales in both directions
 
* Fibonacci comparison program
 
<div class="mw-collapsible mw-collapsed">
 
<source lang=python>
 
# Fibonacci program from class
 
</source>
 
<div class="mw-collapsible-content">
 
<source lang=python>
 
# -*- coding: utf-8 -*-
 
"""
 
Created on Fri Sep 21
 
 
@author: DukeEgr93
 
"""
 
 
import time
 
import numpy as np
 
import matplotlib.pyplot as plt
 
 
 
def fib(n):
 
    if not isinstance(n, int) or n < 1:
 
        print('Invalid input!')
 
        return None
 
    if n == 1 or n == 2:
 
        return 1
 
    else:
 
        return fib(n - 1) + fib(n - 2)
 
 
 
"""
 
fib_dewey based on fibonacci at:
 
http://greenteapress.com/thinkpython2/html/thinkpython2012.html#sec135
 
"""
 
known = {1: 1, 2: 1}
 
 
 
def fib_dewey(n):
 
    if not isinstance(n, int) or n < 1:
 
        print('Invalid input!')
 
        return None
 
    if n in known:
 
        return known[n]
 
    res = fib_dewey(n - 1) + fib_dewey(n - 2)
 
    known[n] = res
 
    return res
 
 
 
def fibloop(n):
 
    if not isinstance(n, int) or n < 1:
 
        print('Invalid input!')
 
        return None
 
    head = 1
 
    tail = 1
 
    for k in range(3, n + 1):
 
        head, tail = head + tail, head
 
    return head
 
 
 
N = 30
 
uvals = [0] * N
 
utimes = [0] * N
 
 
mvals = [0] * N
 
mtimes = [0] * N
 
 
lvals = [0] * N
 
ltimes = [0] * N
 
 
for k in range(1, N + 1):
 
    temp_time = time.perf_counter()  # time.clock() was removed in Python 3.8
 
    uvals[k - 1] = fib(k)
 
    utimes[k - 1] = time.perf_counter() - temp_time
 
 
    temp_time = time.perf_counter()
 
    mvals[k - 1] = fib_dewey(k)
 
    mtimes[k - 1] = time.perf_counter() - temp_time
 
 
    temp_time = time.perf_counter()
 
    lvals[k - 1] = fibloop(k)
 
    ltimes[k - 1] = time.perf_counter() - temp_time
 
 
for k in range(1, N + 1):
 
    print('{:2d} {:6d} {:0.2e}'.format(k, uvals[k - 1], utimes[k - 1]), end='')
 
    print(' {:6d} {:0.2e}'.format(mvals[k - 1], mtimes[k - 1]), end='')
 
    print(' {:6d} {:0.2e}'.format(lvals[k - 1], ltimes[k - 1]))
 
 
k = np.arange(1, N + 1)
 
plt.figure(1)
 
plt.clf()
 
plt.plot(k, utimes, k, mtimes, k, ltimes)
 
plt.legend(['recursive', 'memoized', 'looped'])
 
 
plt.figure(2)
 
plt.clf()
 
plt.semilogy(k, utimes, k, mtimes, k, ltimes)
 
plt.legend(['recursive', 'memoized', 'looped'])
 
</source>
 
</div>
 
</div>
 
 
=== Lecture 9 ===
 
* Different number systems convey information in different ways.
 
* "One billion dollars!" may not mean the same thing to different people: [https://en.wikipedia.org/wiki/Long_and_short_scales Long and Short Scales]
 
* Floats (specifically double precision floats) are stored with a sign bit, 52 fractional bits, and 11 exponent bits.  The exponent bits form a code:
 
** 0 (or 00000000000): the number is either 0 or a denormal
 
** 2047 (or 11111111111): the number is either infinite or not-a-number
 
** Others: the power of 2 for scientific notation is 2**(code-1023)
 
*** The largest number is thus just *under* 2**1024 (ends up being (2-2**-52)*2**1023<math>\approx 1.798\times 10^{308}</math>).
 
*** The smallest normal number (full precision) is 2**(-1022)<math>\approx 2.225\times 10^{-308}</math>.
 
*** The smallest denormal number (only one significant binary digit) is 2**(-1022)/2**52, or about 5e-324.
 
** When adding or subtracting, Python can only operate on the common significant digits - meaning the smaller number will lose precision.
 
** (1+1e-16)-1=0 and (1+1e-15)-1=1.1102230246251565e-15
 
** Avoid intermediate calculations that cause problems: if x=1.7e308,
 
*** (x+x)/x is inf
 
*** x/x + x/x is 2.0
 
* List-building roundoff demonstration
 
<div class="mw-collapsible mw-collapsed">
 
<source lang=python>
 
# Roundoff Demo
 
</source>
 
<div class="mw-collapsible-content">
 
<source lang=python>
 
 
import numpy as np
 
import matplotlib.pyplot as plt
 
 
start = 10
 
delta = 0.1
 
finish = 100
 
 
k = 0
 
val_c = [start]
 
val_a = [start]
 
 
while val_c[-1]+delta <= finish:
 
    val_c += [val_c[-1] + delta]
 
    k = k + 1
 
    val_a += [start + k*delta]
 
 
array_c = np.array(val_c)
 
array_a = np.array(val_a)
 
 
diffs = [val_c[k]-val_a[k] for k in range(len(val_a))]
 
 
plt.figure(1)
 
plt.clf()
 
#plt.plot(array_c - array_a, 'k-')
 
plt.plot(diffs, 'k-')
 
</source>
 
</div>
 
</div>
 
* Exponential demo program
 
<div class="mw-collapsible mw-collapsed">
 
<source lang=python>
 
# Exponential demo
 
</source>
 
<div class="mw-collapsible-content">
 
<source lang=python>
 
#%%
 
import numpy as np
 
import matplotlib.pyplot as plt
 
#%%
 
def exp_calc(x, n):
 
    return (1 + (x/n))**n
 
#%% Shows that approximation gets better as n increases
 
print(np.exp(1))
 
print(exp_calc(1, 1))
 
print(exp_calc(1, 10))
 
print(exp_calc(1, 100))
 
print(exp_calc(1, 1000))
 
print(exp_calc(1, 10000))
 
print(exp_calc(1, 100000))
 
#%% Shows that once n is too big, problems happen
 
n = np.logspace(0, 18, 1000)
 
 
plt.figure(1)
 
plt.clf()
 
plt.semilogx(n, exp_calc(1, n), 'b-')
 
</source>
 
</div>
 
</div>
 
 
=== Lecture 10 ===
 
 
* Iterative solutions - where next approximation or guess is based on previous guess and some algorithm
 
** Series method for finding the exponential <math>e^x</math>:<center><math>
 
e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}
 
</math>
 
</center>
 
so
 
<center><math>
 
y_{new}=y_{old}+\frac{x^n}{n!}
 
</math>
 
</center> and the initial guess for <math>y</math> is the <math>n=0</math> term, or 1.
 
:* Newton method for square roots - to find <math>y</math> where <math>y=\sqrt{x}</math>: <center><math>
 
y_{new}=\frac{y_{old}+\frac{x}{y_{old}}}{2}
 
</math>
 
::</center> and a good initial guess is <math>y=x</math>.  A bad initial guess is <math>y=0</math>
 
:* Chapra Figure 4.2 (as translated in Python) can be very helpful!  Now featuring 100% fewer semi-colons!
 
<div class="mw-collapsible mw-collapsed">
 
<source lang=python>
 
# Chapra Figure 4.2 from Applied Numerical Methods with MATLAB, 4th ed, translated into Python:</source>
 
<div class="mw-collapsible-content">
 
<source lang=python>
 
import math as m  # needed for m.factorial() below
 
 
def iter_meth(x, es=0.0001, maxit=50):
 
    '''Maclaurin series of exponential function
 
    iter_meth(x,es,maxit)
 
    input:
 
    x = value at which series evaluated
 
    es = stopping criterion (default = 0.0001)
 
    maxit = maximum iterations (default = 50)
 
    output:
 
    fx = estimated value
 
    ea = approximate relative error (%)
 
    iter = number of iterations
 
    '''
 
 
    # initialization
 
    iter = 1
 
    sol = 1
 
    ea = 100
 
    # iterative calculation
 
    while 1:
 
        solold = sol;
 
        sol = sol + x ** iter / m.factorial(iter)
 
        iter = iter + 1
 
        if sol != 0:
 
            ea = abs((sol - solold)/sol)*100
 
       
 
        if ea<=es or iter>=maxit:
 
            break
 
 
    fx = sol
 
 
    return (fx, ea, iter)
 
</source>
 
</div>
 
</div>
 
:* Only two changes in 4.2 to go from exponential calculation to finding a square root
 
::* Change initial sol = 1 to sol = x to make x initial guess for square root
 
::* Change
 
:::<source lang=python>
 
sol = sol + x ** iter / m.factorial(iter)
 
</source>
 
:::line to:
 
:::<source lang=python>
 
sol = sol / 2 + x / (sol * 2)
 
</source>
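::* Putting those two changes together, a minimal sketch of the resulting square-root function (the name sqrt_meth is made up; the stopping logic is copied from Figure 4.2 above):
:::<source lang=python>
def sqrt_meth(x, es=0.0001, maxit=50):
    '''Newton method square root, following the structure of Chapra Fig. 4.2'''
    iter = 1
    sol = x  # initial guess is x itself
    ea = 100
    while 1:
        solold = sol
        sol = sol / 2 + x / (sol * 2)
        iter = iter + 1
        if sol != 0:
            ea = abs((sol - solold) / sol) * 100
        if ea <= es or iter >= maxit:
            break
    return (sol, ea, iter)


print(sqrt_meth(2))  # approximately (1.4142135..., small error, iteration count)
</source>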
 
=== Lecture 11 ===
 
* Debugging
 
* Turtle basics
 
 
=== Lecture 12 ===
 
See [[EGR_103/Fall_2018/Lec_12]]
 
 
=== Lectures 13 ===
 
Review
 
 
=== Lecture 14 ===
 
Test
 
 
=== Lecture 15 ===
 
* Use NumPy for linear algebra - specifically, create 2-D arrays using a list of lists as an input argument to np.array()
 
* The dot product is defined as the sum of the products of the corresponding elements in two arrays with the same number of elements
 
* Element-wise operations work with corresponding elements in two same-shape arrays or with an array and a scalar; examples include *, /, +, -, **, trig functions, etc.
 
* Matrix multiplication is a collection of dot products between rows of the first array and columns of the second.  Pick a row from the first and a column from the second - the resulting dot product goes in the same row as picked from the first array and the same column as picked from the second array.
 
* In Python 3.5 and above, the @ operator will do matrix multiplication; the np.dot(a, b) command also does matrix multiplication if a and b are 2-D arrays.  Without NumPy, you need a triple loop (see the sketch below).
 
* See Chapter 8 in Chapra.
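* A minimal sketch (not from class) of that triple loop for matrices stored as lists of lists:
<source lang=python>
def matmul(A, B):
    '''Matrix multiplication without NumPy (A and B are lists of lists).'''
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for i in range(rows)]
    for i in range(rows):           # row picked from A
        for j in range(cols):       # column picked from B
            for k in range(inner):  # dot product of that row and column
                C[i][j] += A[i][k] * B[k][j]
    return C


print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
</source>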
 
 
 
=== Lecture 16 ===
 
See [[EGR_103/Fall_2018/Lec_16]]
 
 
=== Lecture 17 ===
 
* Statistical definitions: mean, model, estimate, St, Sr, r2
 
* Coding statistics to quantify goodness of fit: [[Python:Fitting#Example_Code]]
 
* Differences between "mathematically good" (high <math>r^2</math>) versus "scientifically good"
 
 
=== Lecture 18 ===
 
* Recap of stats and polyfit/polyval code
 
* Reminder that mathematically best fit for a given model has smallest Sr value
 
* Linear algebra proof of how to calculate coefficients for best fit - punch line is to pre-multiply linear algebra equation by transpose of functions matrix
 
* Key code is to build functions matrix and use np.linalg.lstsq to calculate coefficients:
 
<syntaxhighlight lang='python'>
 
a_mat = np.block([[xv**1, xv**0]])
 
pvec = np.linalg.lstsq(a_mat, yv, rcond=None)[0]
 
</syntaxhighlight>
 
* See [[Python:Fitting#Example_Code_2]] for complete example
 
  
=== Lecture 19 ===
 
  
=== Lecture 20 - Zeros and Optimizations ===
 
* SciPy references (all from [https://docs.scipy.org/doc/scipy/reference/optimize.html Optimization and root finding]):
 
** [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html scipy.optimize.brentq] - closed method root finding
 
** [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html scipy.optimize.fsolve] - open method root finding
 
** [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html scipy.optimize.fmin] - unbounded minimization
 
** [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fminbound.html scipy.optimize.fminbound] - bounded minimization
 
  
 
=== Lecture 21 - Basic Interpolation; Numerical Derivatives and Integrals ===
 

Revision as of 18:41, 8 November 2019

This page will be used to keep track of the commands and major concepts for each lecture in EGR 103.

Lecture 1 - Introduction

  • Class web page: EGR 103L; assignments, contact info, readings, etc - see slides on Errata/Notes page
  • Sakai page: Sakai 103L page; grades, surveys and tests, some assignment submissions
  • CampusWire page: CampusWire 103L page; message board for questions - you need to be in the class and have the access code to subscribe.

Lecture 2 - Programs and Programming

Lecture 3 - "Number" Types

  • Python is a "typed" language - variables have types
  • We will use eight types:
    • Focus of the day: int, float, and array
    • Focus a little later: string, list, tuple
    • Focus later: dictionary, set
  • int: integers; Python can store these perfectly
  • float: floating point numbers - "numbers with decimal points" - Python sometimes has problems
  • array
    • Requires numpy, usually with import numpy as np
    • Organizational unit for storing rectangular arrays of numbers
  • Math with "Number" types works the way you expect
    • ** * / // % + -
  • Relational operators can compare "Number" Types and work the way you expect with True or False as an answer
    • < <= == >= > !=
    • With arrays, either same size or one is a single value; result will be an array of True and False the same size as the array
  • Slices allow us to extract information from an array or put information into an array
  • a[0] is the element in a at the start
  • a[3] is the element in a three away from the start
  • a[:] is all the elements in a because what is really happening is:
    • a[start:until] where start is the first index and until is just *past* the last index;
    • a[3:7] will return a[3] through a[6] in 4-element array
    • a[start:until:increment] will skip indices by increment instead of 1
    • To go backwards, a[start:until:-increment] will start at an index and then go backwards until getting at or just past until.
  • For 2-D arrays, you can index items with either separate row and column indices or indices separated by commas:
    • a[2][3] is the same as a[2, 3]
    • Only works for arrays!
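  • A minimal sketch (not from the lecture; the arrays here are made up) of these slicing rules:
import numpy as np

a = np.array([10, 20, 30, 40, 50, 60, 70, 80])
print(a[0], a[3])        # 10 40
print(a[3:7])            # [40 50 60 70] -- stops just past index 6
print(a[1:8:2])          # [20 40 60 80] -- skips indices by 2
print(a[6:2:-1])         # [70 60 50 40] -- goes backwards until just past 2

b = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(b[2][1], b[2, 1])  # 8 8 -- same element, two indexing styles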

Lecture 4 - Other Types and Functions

  • Lists are set off with [ ] and entries can be any valid type (including other lists!); entries can be of different types from other entries
  • List items can be changed
  • Tuples are indicated by commas without square brackets (and are usually shown with parentheses - which are required if trying to make a tuple an entry in a tuple or a list)
  • Dictionaries are collections of key : value pairs set off with { }; keys can be any immutable type (int, float, string, tuple) and must be unique; values can be any type and do not need to be unique
  • To read more:
    • Note! Many of the tutorials below use Python 2 so instead of print(thing) it shows print thing
    • Lists at tutorialspoint
    • Tuples at tutorialspoint
    • Dictionary at tutorialspoint
  • Defined functions can be multiple lines of code and have multiple outputs.
    • Four different types of input parameters:
      • Required (listed first)
      • Named with defaults (second)
      • Additional positional arguments ("*args") (third)
        • Function will create a tuple containing these items in order
      • Additional keyword arguments ("**kwargs") (last)
        • Function will create a dictionary of keyword and value pairs
    • Function ends when indentation stops or when the function hits a return statement
    • Return returns single item as an item of that type; if there are multiple items returned, they are stored in a tuple
    • If there is a left side to the function call, it either needs to be a single variable name or a tuple with as many entries as the number of items returned
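  • A minimal sketch (the function name demo is made up) of the four kinds of input parameters and of tuple returns:
def demo(a, b=2, *args, **kwargs):
    # required a, named-with-default b, extra positional args, extra keyword kwargs
    print(a, b, args, kwargs)
    return a, b  # two items come back as one tuple

out = demo(1)                # prints 1 2 () {}; out is the tuple (1, 2)
out = demo(1, 3, 4, 5, c=6)  # prints 1 3 (4, 5) {'c': 6}
x, y = demo(1)               # tuple unpacked into two separate names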

Lecture 5 - Format, Logic, Decisions, and Loops

Lecture 6 - String Things and Loops

  • ord to get numerical value of each character
  • chr to get character based on integer
  • map(fun, sequence) to apply a function to each item in a sequence
  • Basics of while loops
  • Basics of for loops
  • List comprehensions
    • [FUNCTION for VAR in SEQUENCE if LOGIC]
      • The FUNCTION should return a single thing (though that thing can be a list, tuple, etc)
      • The "if LOGIC" part is optional
      • [k for k in range(3)] creates [0, 1, 2]
      • [k**2 for k in range (5, 8)] creates [25, 36, 49]
      • [k for k in 'hello' if k<'i'] creates ['h', 'e']
      • [(k,k**2) for k in range(11) if k%3==2] creates [(2, 4), (5, 25), (8, 64)]
    • Wait - that's the simplified version...here:
  • Want to see Amharic?
list(map(chr, range(4608, 4992)))
  • Want to see the Greek alphabet?
for k in range(913,913+25):
    print(chr(k), chr(k+32))

Lecture 7 - Applications

# tpir.py from class:
import numpy as np
import time

def create_price(low=100, high=1500):
    return np.random.randint(low, high+1)
    
def get_guess():
    guess = int(input('Guess: '))
    return guess
    
def check_guess(actual, guess):
    if actual > guess:
        print('Higher!')
    elif actual < guess:
        print('Lower!')

    
if __name__ == '__main__':
    #print(create_price(0, 100))
    the_price = create_price()
    the_guess = get_guess()
    start_time = time.perf_counter()  # time.clock() was removed in Python 3.8
    #print(the_guess)
    while the_price != the_guess and (time.perf_counter() < start_time+30):
        check_guess(the_price, the_guess)
        the_guess = get_guess()
    
    if the_price==the_guess:    
        print('You win!!!!!!!')
    else:
        print('LOOOOOOOOOOOOOOOSER')
# nato_trans.py from class:
fread = open('NATO.dat', 'r')

d = {}

for puppies in fread:
    #print(puppies)  # if you want to see the whole line
    
    #key = puppies[0]
    #value = puppies[:-1]
    #d[key] = value
    
    d[puppies[0]] = puppies[:-1]

fread.close()

hamster = input('Word: ').upper()

for kittens in hamster:
    #print(d[letter], end=' ')
    print(d.get(kittens, 'XXX'), end=' ')
    
'''
In class - one question was "in cases where there is not a code, can it
return the original value instead of XXX" -- yes:
    print(d.get(kittens, kittens))
'''
  • Data file we used:
# NATO.dat from class:
Alfa
Bravo
Charlie
Delta
Echo
Foxtrot
Golf
Hotel
India
Juliett
Kilo
Lima
Mike
November
Oscar
Papa
Quebec
Romeo
Sierra
Tango
Uniform
Victor
Whiskey
X-ray
Yankee
Zulu

Lecture 8 - Taylor Series and Iterative Solutions

  • Taylor series fundamentals
  • Maclaurin series approximation for exponential uses Chapra 4.2 to compute terms in an infinite sum.
\( y=e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!} \)
so
\( \begin{align} y_{init}&=1\\ y_{new}&=y_{old}+\frac{x^n}{n!} \end{align} \)
  • Newton Method for finding square roots uses Chapra 4.2 to iteratively solve using a mathematical map. To find \(y\) where \(y=\sqrt{x}\):
    \( \begin{align} y_{init}&=1\\ y_{new}&=\frac{y_{old}+\frac{x}{y_{old}}}{2} \end{align} \)
  • See Python version of Fig. 4.2 and modified version of 4.2 in the Resources section of Sakai page under Chapra Pythonified

Lecture 9 - Binary and Floating Point Numbers

  • Different number systems convey information in different ways.
    • Roman Numerals
    • Chinese Numbers
    • Ndebe Igbo Numbers
    • Binary Numbers
      • We went through how to convert between decimal and binary
    • Kibibytes et al
  • "One billion dollars!" may not mean the same thing to different people: Long and Short Scales
  • Floats (specifically double precision floats) are stored with a sign bit, 52 fractional bits, and 11 exponent bits. The exponent bits form a code:
    • 0 (or 00000000000): the number is either 0 or a denormal
    • 2047 (or 11111111111): the number is either infinite or not-a-number
    • Others: the power of 2 for scientific notation is 2**(code-1023)
      • The largest number is thus just *under* 2**1024 (ends up being (2-2**-52)*2**1023\(\approx 1.798\times 10^{308}\)).
      • The smallest normal number (full precision) is 2**(-1022)\(\approx 2.225\times 10^{-308}\).
      • The smallest denormal number (only one significant binary digit) is 2**(-1022)/2**52, or about 5e-324.
    • When adding or subtracting, Python can only operate on the common significant digits - meaning the smaller number will lose precision.
    • (1+1e-16)-1=0 and (1+1e-15)-1=1.1102230246251565e-15
    • Avoid intermediate calculations that cause problems: if x=1.7e308,
      • (x+x)/x is inf
      • x/x + x/x is 2.0
  • In cases where mathematical formulas have limits to infinity, you have to pick numbers large enough to properly calculate values but not so large as to cause errors in computing:
    • $$e^x=\lim_{n\rightarrow \infty}\left(1+\frac{x}{n}\right)^n$$
# Exponential Demo

import numpy as np
import matplotlib.pyplot as plt

def exp_calc(x, n):

   return (1 + x/n)**n

if __name__ == "__main__":

   n = np.logspace(0, 17, 1000)
   y = exp_calc(1, n)
   fig, ax = plt.subplots(num=1, clear=True)
   ax.semilogx(n, y)
   fig.savefig('ExpDemoPlot1.png')
   
   # Focus on right part
   n = np.logspace(13, 16, 1000)
   y = exp_calc(1, n)
   fig, ax = plt.subplots(num=2, clear=True)
   ax.semilogx(n, y)
   fig.savefig('ExpDemoPlot2.png')


Lecture 10 - Monte Carlo Methods

  • See walk1 in Resources section of Sakai

Lecture 11 - Style, Code Formatters, Docstrings, and More Walking

  • Discussion of PEP and PEP8 in particular
  • Autostylers include black, autopep8, and yapf -- we will mainly use black
    • To get the package:
      • On Windows start an Anaconda Prompt (Start->Anaconda3->Anaconda Prompt) or on macOS open a terminal and change to the \users\name\Anaconda3 folder
      • pip install black should install the code
    • To use that package:
      • Change to the directory where your file lives. On Windows, to change drives, type the drive letter and a colon by itself on a line, then use cd and a path to change directories; on macOS, type cd /Volumes/NetID where NetID is your NetID to change into your mounted drive.
      • Type black FILE.py and note that this will actually change the file - be sure to save any changes you made to the file before running black
      • As noted in class, black automatically assumes 88 characters in a line; to get it to use the standard 80, use the -l 80 option, e.g. black FILE.py -l 80
  • Docstrings
    • We will be using the numpy style at docstring guide
    • Generally need a one-line summary, summary paragraph (if needed), a list of parameters, and a list of returns
    • Specific formatting chosen to allow Spyder's built in help tab to format file in a pleasing way
  • More walking
    • We went through the walk_1 code again and then decided on three different ways we could expand it and looked at how that might impact the code:
    • Choose from more integers than just 1 and -1 for the step: very minor impact on code
    • Choose from a selection of floating point values: minor impact other than a bit of documentation since ints and floats operate in similar ways
    • Walk in 2D rather than along a line: major impact in terms of needing to return x and y value for the step, store x and y value for the location, plot things differently
    • All codes from today will be on Sakai in Resources folder
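  • A small sketch of the numpy docstring layout described above (rect_area is a made-up example, not one of the class codes):
def rect_area(width, height=1.0):
    """
    Compute the area of a rectangle.

    Parameters
    ----------
    width : float
        Width of the rectangle.
    height : float, optional
        Height of the rectangle; defaults to 1.0.

    Returns
    -------
    float
        The area, width * height.
    """
    return width * height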

Lecture 12 - Arrays and Matrix Representation in Python

  • 1-D and 2-D Arrays
    • Python does mathematical operations differently for 1 and 2-D arrays
  • Matrix multiplication (by hand)
  • Matrix multiplication (using @ in Python)
  • To multiply matrices A and B ($$C=A\times B$$ in math or C=A@B in Python) using matrix multiplication, the number of columns of A must match the number of rows of B; the results will have the same number of rows as A and the same number of columns as B. Order is important
  • Setting up linear algebra equations
  • Determinants of matrices and the meaning when the determinant is 0
    • Shortcuts for determinants of 1x1, 2x2 and 3x3 matrices (see class notes for processes)
$$\begin{align*} \mbox{det}([a])&=a\\ \mbox{det}\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right)&=ad-bc\\ \mbox{det}\left(\begin{bmatrix}a&b&c\\d&e&f\\g&h&i\end{bmatrix}\right)&=aei+bfg+cdh-afh-bdi-ceg\\ \end{align*}$$

Lecture 13 - Linear Algebra and Solutions

  • Inverses of matrices:
    • Generally, $$\mbox{inv}(A)=\frac{\mbox{cof}(A)^T}{\mbox{det}(A)}$$ where the superscript T means transpose...
      • And $$\mbox{det}(A)=\sum_{i\mbox{ or }j=0}^{N-1}a_{ij}(-1)^{i+j}M_{ij}$$ for some $$j$$ or $$i$$...
        • And $$M_{ij}$$ is a minor of $$A$$, specifically the determinant of the matrix that remains if you remove the $$i$$th row and $$j$$th column or, if $$A$$ is a 1x1 matrix, 1
          • And $$\mbox{cof(A)}$$ is a matrix where the $$i,j$$ entry $$c_{ij}=(-1)^{i+j}M_{ij}$$
    • Good news - for this class, you need to know how to calculate inverses of 1x1 and 2x2 matrices only:
$$ \begin{align} \mbox{inv}([a])&=\frac{1}{a}\\ \mbox{inv}\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right)&=\frac{\begin{bmatrix}d &-b\\-c &a\end{bmatrix}}{ad-bc} \end{align}$$
  • Converting equations to a matrix system:
    • For a certain circuit, conservation equations learned in upper level classes will yield the following two equations:
$$\begin{align} \frac{v_1-v_s}{R_1}+\frac{v_1}{R_2}+\frac{v_1-v_2}{R_3}&=0\\ \frac{v_2-v_1}{R_3}+\frac{v_2}{R_4}&=0 \end{align}$$
  • Assuming $$v_s$$ and the $$R_k$$ values are known, to write this as a matrix equation, you need to get $$v_1$$ and $$v_2$$ on the left and everything else on the right:
$$\begin{align} \left(\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}\right)v_1+\left(-\frac{1}{R_3}\right)v_2&=\frac{v_s}{R_1}\\ \left(-\frac{1}{R_3}\right)v_1+\left(\frac{1}{R_3}+\frac{1}{R_4}\right)v_2&=0 \end{align}$$
  • Now you can write this as a matrix equation:

$$ \newcommand{\hmatch}{\vphantom{\frac{1_s}{R_1}}} \begin{align} \begin{bmatrix} \frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3} & -\frac{1}{R_3} \\ -\frac{1}{R_3} & \frac{1}{R_3}+\frac{1}{R_4} \end{bmatrix} \begin{bmatrix} \hmatch v_1 \\ \hmatch v_2 \end{bmatrix} &= \begin{bmatrix} \frac{v_s}{R_1} \\ 0 \end{bmatrix} \end{align}$$
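  • A sketch of solving that matrix equation numerically; the component values (vs = 10 V and all resistances 1000 Ω) are assumptions, not values from class:
import numpy as np

vs, R1, R2, R3, R4 = 10, 1000, 1000, 1000, 1000  # assumed values

A = np.array([[1/R1 + 1/R2 + 1/R3, -1/R3],
              [-1/R3, 1/R3 + 1/R4]])
b = np.array([[vs/R1], [0]])

v = np.linalg.solve(A, b)
print(v)  # [[v1], [v2]] -- 4 V and 2 V with these values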

Lecture 14 - Solution Sweeps, Norms, and Condition Numbers

  • See Python:Linear_Algebra#Sweeping_a_Parameter for example code on solving a system of equations as one parameter (in the coefficient matrix, in the forcing vector, or potentially both) changes; a short sketch appears after this list
  • Chapra 11.2.1 for norms
  • Chapra 11.2.2 for condition numbers
    • np.linalg.cond() in Python
    • Note: base-10 logarithm of condition number gives number of digits of precision possibly lost due to system geometry and scaling (top of p. 295 in Chapra)
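  • A minimal sketch of such a sweep, using a made-up parameter alpha inside the coefficient matrix:
import numpy as np
import matplotlib.pyplot as plt

alphas = np.linspace(1, 10, 50)  # made-up values for the swept parameter
x1 = np.zeros(alphas.size)
x2 = np.zeros(alphas.size)

for k, alpha in enumerate(alphas):
    A = np.array([[alpha, 2], [1, 3]])  # coefficient matrix depends on alpha
    b = np.array([1, 2])
    x = np.linalg.solve(A, b)
    x1[k], x2[k] = x[0], x[1]

fig, ax = plt.subplots(num=1, clear=True)
ax.plot(alphas, x1, alphas, x2)
ax.legend(['$x_1$', '$x_2$'])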

Lecture 15

  • Test Review

Lecture 16

  • Test

Lecture 17 - Statistics and Curve Fits

  • Definition of curve fitting versus interpolation:
    • Curve fitting involves taking a scientifically vetted model, finding the best coefficients, and making predictions based on the model. The model may not perfectly hit any of the actual data points.
    • Interpolation involves making a guess for values between data points. Interpolants actually hit all the data points but may have no scientific validity at all. Interpolation is basically "connecting the dots," which may involve mathematically complex formulae.
  • Statistical definitions used (see Statistics Symbols for full list):
    • $$x$$ will be used for independent data
    • $$y$$ will be used for dependent data
    • $$\bar{x}$$ and $$\bar{y}$$ will be used for the averages of the $$x$$ and $$y$$ sets
    • $$\hat{y}_k$$ will be used for the estimate of the $$k$$th dependent point
    • $$S_t=\sum_k\left(y_k-\bar{y}\right)^2$$ is the sum of the squares of the data residuals and gives a measure of the spread of the data, though any given value can mean several different things for a data set. It will be a non-negative number; a value of 0 implies all the $$y$$ values are the same.
    • $$S_r=\sum_k\left(y_k-\hat{y}_k\right)^2$$ is the sum of the squares of the estimate residuals and gives a measure of the collective distance between the data points and the model equation for the data points. It will be a non-negative number; a value of 0 implies all the data points are perfectly predicted by the model.
    • $$r^2=\frac{S_t-S_r}{S_t}=1-\frac{S_r}{S_t}$$ is the coefficient of determination; it is a normalized value that gives information about how well a model predicts the data. An $$r^2$$ value of 1 means the model perfectly predicted every value in the data set. A value of 0 means the model does as well as having picked the average. A negative value means the model is worse than merely picking the average.
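  • A short sketch of computing these quantities with NumPy, assuming y holds the data and yhat holds the model estimates (the numbers here are made up):
import numpy as np

y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])      # made-up dependent data
yhat = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # made-up model estimates

St = np.sum((y - np.mean(y))**2)  # spread of the data about its mean
Sr = np.sum((y - yhat)**2)        # spread of the data about the model
r2 = 1 - Sr / St                  # coefficient of determination
print(St, Sr, r2)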

Lecture 18 - More statistics and curve fitting

Lecture 19 - 3D Plotting

Lecture 20 - Roots of Equations

Lecture 21 - Roots and Extrema