Wednesday 5 December 2012

DFSA and NFSA

I find this funny: in an NFSA, a single state can, on the same input, go to two different states. I think this is just like writing a recursive function that only returns true or false and only uses return statements. I remember doing this in CSC148 for a little while; I got really confused then, and still am now. But an NFSA draws out the picture and explains it more clearly.
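To see why the analogy makes sense to me, here is a little sketch (my own toy machine, not one from class): checking whether an NFSA accepts a string really is a recursive true/false function that tries every choice.

```python
# A toy NFSA over {'0', '1'}: from one state, the same input can lead to
# several states. Acceptance is a recursive search that returns True if
# ANY branch of choices ends in an accepting state.
# (Hypothetical machine: accepts strings ending in "01".)
delta = {
    ('q0', '0'): {'q0', 'q1'},   # nondeterminism: two choices on '0'
    ('q0', '1'): {'q0'},
    ('q1', '1'): {'q2'},
}
accepting = {'q2'}

def accepts(state, s):
    if s == '':
        return state in accepting
    # try every possible next state; True if any branch succeeds
    return any(accepts(nxt, s[1:]) for nxt in delta.get((state, s[0]), set()))
```

The `delta.get(..., set())` handles missing transitions: a branch with nowhere to go simply dies, just like a recursive case that returns False.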

Although I have not encountered this kind of question yet, suppose the final exam gives you an NFSA and a really long string and asks you to determine whether the NFSA accepts the string, and then the second part gives you a regular expression and asks you to argue whether it describes the same language as the given NFSA. That would be really hard to argue, since there would be tons of cases: every time you have two paths to choose from, each path has more sub-paths to follow, and it goes on. I just hope this won't happen on the final exam.....

We also talked about how any two or more DFSAs can be combined into one NFSA, by letting a new start state have an epsilon transition to the start state of each DFSA. So we can break a big question into small, easier chunks and use the appropriate transitions to combine them; this works because we are drawing an NFSA, not a DFSA. But even the smaller chunks of the regular expression may not be so easy, so I think all we can do is practice more.
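A sketch of what I mean, with two made-up DFSAs (not the ones from lecture): the epsilon transitions out of the new start state just mean the combined NFSA accepts whenever either original DFSA does.

```python
# Union of two DFSAs via a new start state with epsilon-moves, simulated
# directly: the combined NFSA accepts s iff either original DFSA does.
# Hypothetical machines: M1 accepts strings of even length,
# M2 accepts strings containing at least one '1'.
def run_dfsa(delta, start, accepting, s):
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

M1 = ({('e', '0'): 'o', ('e', '1'): 'o',
      ('o', '0'): 'e', ('o', '1'): 'e'}, 'e', {'e'})
M2 = ({('a', '0'): 'a', ('a', '1'): 'b',
      ('b', '0'): 'b', ('b', '1'): 'b'}, 'a', {'b'})

def union_accepts(s):
    # The epsilon transitions from the new start mean: follow either machine.
    return any(run_dfsa(d, q0, F, s) for (d, q0, F) in (M1, M2))
```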

Also, A3 came out. The first question involves something we have not specifically learned in class, but rather a combination of things we have learned; I think proving it by contradiction will be the easiest way. Questions 2 and 4 involve loops, which I hate! Just following the course notes it shouldn't be so hard, only a lot of typing and work. Question 3 can be found in the course textbook(?), and just following the steps it shows will give us the right answer.


midterm + automata and languages

Yeah! The second midterm result came out, and I am happy with my mark. But I have one little question about the last question on the test. It asks you to change a function so that it takes less time to complete. So I changed the last line of the loop from multiplying the same thing twice to squaring it, "^2" in math, or "**2" in a programming language. I got the question right, but it got me thinking: since "**2" uses the built-in power operation anyway, can we just collapse the whole function into one statement by passing the parameters to the built-in pow? If not, then I think I have done something wrong on this question, and a better answer would be changing the last line to something like pow(pow(), 2), which also takes constant time on top of computing the base. But anyways... the midterm was good.
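I don't remember the exact midterm function, so this is just the shape of the change I mean (hypothetical names, not the real question):

```python
# Hypothetical example: avoid computing the same product twice.
def slow(x):
    return (x * x * x) * (x * x * x)   # the repeated work

def fast(x):
    return (x * x * x) ** 2            # compute the base once, then square
```

Both return the same value; the second just never recomputes the base.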

Today, Danny talked about automata and languages. During lecture I lost my focus because of the midterm..... After reading the slides on my own, I think it is pretty much the same as the patterns I learned in CSC207, i.e. regular expressions.

The "state table" in the slides (I don't know its official name) was hard to understand at first, since I didn't know what the values in it were, but the DFSA diagram really helps. The value inside the table means: if the machine is in this state and reads this input, which state it should go to. Then we talked about L, a language described in English, and L(R), the language of a regular expression R. I find it inconvenient and confusing to use L for both.
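Once the table made sense, I realized running a DFSA is just repeated table lookups. A tiny sketch (my own made-up machine, which accepts binary strings with an even number of 1s):

```python
# Table-driven DFSA: the entry for (state, input) is the state to move to.
# This machine accepts binary strings with an even number of 1s.
table = {
    'even': {'0': 'even', '1': 'odd'},
    'odd':  {'0': 'odd',  '1': 'even'},
}

def dfsa_accepts(s, start='even', accepting=('even',)):
    state = start
    for ch in s:
        state = table[state][ch]   # read one input, follow the table
    return state in accepting
```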

Also, I think the proving techniques have nothing to do with creating the DFSA. It is just like writing a program: we have to think about each case and deal with it. Some cases have the same result, so we put them into the same statement. It is just a matter of thinking (of course!), and on the exam, if you want to finish this kind of question really fast, you will have to work through lots of examples beforehand.

There are more loops this week, how fun!

Just as the title says, there are more loops this week, how fun! ..... I have done some reading since, and I figured out that proving a recursive function correct is not so hard using induction, well..., at least for some recursive functions.

So basically, a recursive function has some "base case" which terminates the program and returns a value to the user (or not, depending on the program). This part maps straight onto the "base case" of the induction. Hey, the shared name might have suggested something already! Then for the induction step, you assume either P(n) or P(0), ..., P(n-1), depending on the recursive function. You observe that n+1 (or n) does not fall into the base case, so you go to the "if statements". Each if statement corresponds to one case, and inside it the function does something before making a recursive call, usually adjusting the start or end position, either in a statement or by changing the parameters.

Then you just argue that the statement or the change of parameters makes sense; if so, since the recursive call now has a smaller input, it must be correct by the assumption, and we can conclude that the whole call is correct.
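A tiny example of the correspondence (my own, not from the course): the correctness argument for this function is literally an induction written as comments.

```python
# Claim: rec_sum(n) == n * (n + 1) // 2 for all n >= 0.
def rec_sum(n):
    # Base case <-> induction base case: P(0) holds, since rec_sum(0) == 0.
    if n == 0:
        return 0
    # Recursive call on a smaller input <-> induction hypothesis P(n-1):
    # assuming rec_sum(n - 1) == (n - 1) * n // 2, the return value is
    # (n - 1) * n // 2 + n == n * (n + 1) // 2, so P(n) holds.
    return rec_sum(n - 1) + n
```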

It didn't occur to me that a recursive function is just like an induction until I did some past tests, which I reviewed reluctantly at first but was later happy about, since I found out the tests had somewhat similar questions. This new way of thinking kind of excites me, and yes I am a nerd, shut up! But I really think it will come in handy some day.

LOOPS

First thing I am going to say is that I dislike analyzing a loop or a recursion, especially proving whether they are correct. For example, mergesort:

def MergeSort(A, b, e):
    if b == e:
        return
    m = (b + e) // 2          # integer midpoint
    MergeSort(A, b, m)
    MergeSort(A, m + 1, e)
    # merge sorted A[b..m] and A[m+1..e] back into A[b..e]
    B = A[:]                  # copy A, so we read from B while writing A
    c = b
    d = m + 1
    for i in range(b, e + 1):
        if d > e or (c <= m and B[c] < B[d]):
            A[i] = B[c]
            c = c + 1
        else:  # d <= e and (c > m or B[c] >= B[d])
            A[i] = B[d]
            d = d + 1

This is the example given in class.

First, we want to know the big-Theta of this recursion, which is OK. Just unwind it for special cases (n = 2^k, 3^k, ...etc), and we get the answer, no big deal.

Then we have to prove that T is non-decreasing, which is a little more annoying than the previous step. That lets us bound T(n) between T(2^floor(lg n)) and T(2^ceil(lg n)), and then we are pretty much done.
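Written out, the unwinding step I mean looks like this, for the usual mergesort recurrence with T(1) = c and c some constant (a sketch, not the exact constants from class):

```latex
\begin{align*}
T(n) &= 2\,T(n/2) + cn \\
     &= 4\,T(n/4) + 2cn \\
     &\;\;\vdots \\
     &= 2^k\,T(1) + kcn = cn + cn\log_2 n \qquad (n = 2^k),
\end{align*}
```

so T(n) is in Theta(n log n) on powers of 2, and since T is non-decreasing, T(2^floor(lg n)) <= T(n) <= T(2^ceil(lg n)) extends the bound to every n.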

So that is it for the big-Theta; this part is OK, just some work.

Then we analyze the recursive binary search, which is the thing I like least, maybe of all the course material. So let's see the example from the course:

def recBinSearch(x, A, b, e):
    if b == e:
        if x <= A[b]:
            return b
        else:
            return e + 1
    else:
        m = (b + e) // 2  # midpoint
        if x <= A[m]:
            return recBinSearch(x, A, b, m)
        else:
            return recBinSearch(x, A, m + 1, e)

From CSC148, we know this is a correct recursive function, but in CSC165 we only learned to prove that a loop is correct, so proving a recursive function is brand new to me. I figured it should be about the same as proving a loop, and no surprise, it is pretty much the same.... But this time, we had to come up with our own precondition, invariant, and postcondition. We check what the inputs must satisfy (for the precondition) and what the outcome should be (for the postcondition), and then, having both, we derive an invariant that nicely brings us from the precondition to the postcondition...
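Here is the specification I would come up with, written as comments on the function, plus a brute-force check of the postcondition (this is my own guess at the conditions, not necessarily the course's official ones):

```python
# Precondition:  A[b..e] is sorted (non-decreasing), 0 <= b <= e < len(A),
#                and x is comparable with the elements of A.
# Postcondition: returns the smallest index i in [b, e] with x <= A[i],
#                or e + 1 if no such index exists.
def recBinSearch(x, A, b, e):
    if b == e:
        return b if x <= A[b] else e + 1
    m = (b + e) // 2
    if x <= A[m]:
        return recBinSearch(x, A, b, m)      # answer lies in A[b..m]
    else:
        return recBinSearch(x, A, m + 1, e)  # x > A[m], so answer in A[m+1..e]

def spec(x, A, b, e):
    # brute-force statement of the postcondition, for checking
    for i in range(b, e + 1):
        if x <= A[i]:
            return i
    return e + 1
```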

This is not really hard to do, but there is an extreme amount of work involved, especially for the invariants, which are sometimes really hard to check all at once when there are many.

I know we have to learn this so that some day we can analyze other people's code, but it is just not my favourite thing.

Tuesday 16 October 2012

Midterm

It's just after the midterm and one lecture since the last post, leaving me no choice but to talk about them.

So it's after the midterm, which I found fairly easy. The first and third questions are pretty similar, as they can both be solved with the same technique: pulling the (n+1) term out of the sum, which leaves the rest in the same form as the hypothesis. The second question was a bit harder than the other two, in my opinion, since I had a little trouble explaining it using symbols and words. I got the idea of how to use complete induction to solve the problem, but along the way I found it really hard to explain my thinking and logic to the reader. I even invented a function to help me do it better, such as edge(n) = all the edges in a tree with n nodes, because to me pure words can't really explain much, but if I write down the math equations to show that the claim follows, it's more convincing.
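This wasn't the exact midterm question, but the "pull out the (n+1) term" step looks like this on a simple example, the induction step for the sum of the first n integers:

```latex
\begin{align*}
\sum_{i=1}^{n+1} i &= (n+1) + \sum_{i=1}^{n} i \\
                   &= (n+1) + \frac{n(n+1)}{2} \qquad \text{(by the hypothesis)} \\
                   &= \frac{(n+1)(n+2)}{2}.
\end{align*}
```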

Then, after one hour of midterm, we had more lecture, which was hard to listen to as my mind was still on the midterm questions. I copied down some notes, but they were nearly useless since I didn't copy down the loop itself, just the proof parts. Luckily, I could find the notes on the course website. After reading them, I think I get the proof, but I would rather do it the CSC165 way, which is more familiar. Anyways, I think I will get used to the new way too.

Monday 8 October 2012

First Post

Since this is the first post, I am going to talk about what I learned in the first few weeks. The first lecture was about mathematical induction, nothing too difficult; it is pretty much the same as what I learned in CSC165. Then we learned about complete induction, which was new to me. At first I didn't really get it: how, by proving just the base case and assuming everything up to n-1 is true, then proving n is true, the statement becomes true for all n. However, after Danny did a few examples, I kind of got the hang of it. It is not much different from mathematical induction. Since the base case is true, if the next number can be broken into a composition of base cases, it must be true. Then, just like dominoes, if each n can be broken into a composition of smaller ones, it must be true too.

However, there was a little something that confused me. It was the question about postage of 3 and 5 cents making up every amount after the first few. The solution I would come up with is proving 8, 9, 10 are true; then, by adding 3s to each one, I can make up everything after. Another solution Danny suggested was to prove 8 is right, then assume everything up to n-1 is true and that every such number (after the first few) uses at least one 5 or three 3s, then prove n is true. This seems strange to me, since we assume the numbers have at least one 5 or three 3s. Then what I have proved is only that every number with at least one 5 or three 3s works. What about the ones that do not have one 5 or three 3s? Although there are none, we didn't prove that.
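Whichever proof we use, I can at least convince myself the claim itself is true by brute force (my own quick sketch, checking up to 200):

```python
# Check the postage claim: n can be written as 3a + 5b with a, b >= 0.
def postage_ok(n):
    # for each possible count of 5s, see if the remainder splits into 3s
    return any(n - 5 * b >= 0 and (n - 5 * b) % 3 == 0
               for b in range(n // 5 + 1))
```

Every n from 8 to 200 passes, while 4 and 7 (correctly) fail.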

Anyway, after that we had well ordering, then structural induction, which are proofs I have seen in math courses, so not a big problem for me. Lastly, the Fibonacci sequence, which is something I am not so familiar with, but still manageable; I just need to be careful with the base cases, as the recurrence only works for n greater than two. And this ends my first post of the CSC236 blog.