Thursday, March 29, 2007

If you are planning to do the exam at home again--here are the instructions


So, here is the standard FAQ on the at-home version of the in-class exam.

Let me know if you have questions (if you are afraid of asking them,
you can use http://rakaposhi.eas.asu.edu/cgi-bin/mail?rao)


0. What are the ground rules for doing this--

Only that (a) you have not talked to anyone about the exam and (b) you
have to submit it at the ***beginning of the class on Tuesday 4/3***

1. Do I do just the parts I thought I didn't do well or the whole exam?


You have to do the whole exam (see below as to why)


2. Do I lose anything if I don't do it at home?

No (okay--you do lose the satisfaction of doing it twice ;-). Your
grade on the in-class test will stand.

3. How is the effective midterm grade computed?


Eff = max(in-class, w*in-class + (1-w)*at-home)

4. What is the range of w?

0.5 < w < 1

(typical values in the past ranged between .6 and .666)
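
For the literal-minded, here is a tiny Common Lisp sketch of the Q3
computation (the function name and the default w=0.6 are my own choices,
not official course code):

(defun effective-midterm (in-class at-home &optional (w 0.6))
  ;; Eff = max(in-class, w*in-class + (1-w)*at-home)
  (max in-class (+ (* w in-class) (* (- 1 w) at-home))))

;; (effective-midterm 40 55) => 46.0  ; the at-home version helps
;; (effective-midterm 40 30) => 40    ; you can never do worse than in-class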

5. But if everyone else does it at home and improves their grade, and I
decide to watch Seinfeld reruns, don't I lose out?

No. First of all, *nobody* ever loses out by watching Seinfeld reruns
(Channel 10; week nights 10:30 and again at 11:30; also Channel 14 on
Rao's TV).


The difference between your in-class score and the Eff score will be
considered as your _extra credit_ on the midterm (and thus those
points won't affect grade cutoffs).


6. How do you devise these ludicrously complex schemes?


This is the only way of making Excel do some real work.


------------------------
Rao

Wednesday, March 28, 2007

Fwd: office hours before midterm



---------- Forwarded message ----------
From: Subbarao Kambhampati <subbarao2z2@gmail.com>
Date: Mar 28, 2007 9:23 AM
Subject: office hours before midterm
To: Nanan <nanan9177@gmail.com>

I will be available between 3-4pm today for any consultations. I will also
be available between 1-2pm tomorrow. If I am in my office at other times, you are
welcome to ask me questions.

rao


On 3/27/07, Nanan <nanan9177@gmail.com> wrote:
Dear Dr. Kambhampati,
 
Will you hold office hours for the midterm exam? Thank you.
 
Plus, what is the meaning of "A moment of order m exists only if r>m+1"?

The mean is considered the first moment, and the variance the second
moment, of a statistical distribution; higher-order moments are defined
analogously in terms of cubes, fourth powers, etc.
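
(In case the condition itself is puzzling: assuming the question is about a
power-law distribution p(x) = C x^{-r}--my reading of the context, since the
original question doesn't say--the m-th moment is

E[X^m] = \int_{x_0}^{\infty} x^m \, C\, x^{-r}\, dx
       = C \int_{x_0}^{\infty} x^{m-r}\, dx,

which converges at the upper limit only when m - r < -1, i.e., when r > m+1.)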


Best Wishes,
Nan

Tuesday, March 27, 2007

Reminder: You will have to submit the review/questions of the paper on collaborative filtering in class today



Folks
 
 Please come prepared with a short review and questions regarding the paper on collaborative filtering on Google News. You will use this to bring up questions during the class discussion. This sheet will also be turned in at the end of the class and will become part of homework 3.

Rao

Midterm Syllabus + Solutions for Homework 2

Folks
 Midterm will cover all the topics covered in homeworks 1 and 2. This includes all the lectures up to and including the lecture of March 1st.

Solutions for the homework 2 are posted and are available from the homework page
 (here is the direct link: http://rakaposhi.eas.asu.edu/cse494/hw2-s07-solns)

rao

Sunday, March 25, 2007

Re: specimen midterm for CSE494

Sorry, I had forgotten to put cse494 in the URL.

Both these URLs will work now


http://rakaposhi.eas.asu.edu/s07-specimen-exam.pdf


http://rakaposhi.eas.asu.edu/cse494/s07-specimen-exam.pdf



On 3/25/07, Zheshen Wang <Zheshen.Wang@asu.edu> wrote:
The link doesn't work...
 
thanks
 
Zheshen(Jessie)
 
2007/3/25, Subbarao Kambhampati <rao@asu.edu>:

Folks
 The specimen midterm from a previous offering of the course is made available at

http://rakaposhi.eas.asu.edu/s07-specimen-exam.pdf

This should give you some idea about how the midterm might be structured.

regards
rao






--
---------------------------------------------------
Wang, Zheshen
Computer Science Department
Arizona State University


Friday, March 23, 2007

Midterm on next Thursday--in class

Folks
 As I mentioned in Tuesday's class, we will have the midterm next Thursday in class.
(Since 3/30 is the drop date, I think it makes sense to have the exam before the drop date.)

regards
rao

Thursday, March 22, 2007

Blog Task for next class + Discouraging dim sum dining (in-class attendance..)

Folks
 
 Please come prepared with a short review and questions regarding the paper on collaborative filtering on Google News. You will use this to bring up questions during the class discussion. This sheet will also be turned in at the end of the class and will become part of homework 3.

Also, here is a link to the adversarial classification problem I mentioned (I also added it to the readings list)

http://www.cs.washington.edu/homes/pedrod/papers/kdd04.pdf

Finally, I would like to remind you that attendance is not optional in this class. If you are registered and/or requested permission to attend, you are expected to attend consistently (just as I am expected to show up consistently).

regards
Rao
Thursday, March 15, 2007

Article on video search

Nice article on the current trend in video search. Read it when you have time. http://www.msnbc.msn.com/id/17596108/

Newton

Wednesday, March 14, 2007

**important** regarding project B

Hi everyone,
The last hashedlinks file that was uploaded had a minor bug, so a new hashedlinks file has been uploaded as part of the project_task_B.zip file. I am also attaching the new hashedlinks file with this email.
 
You need to replace the old hashedlinks file with this one.
 
Bhaumik

Friday, March 9, 2007

Clarification on K-medoid algorithm

Regarding the discussion on the K-medoid algorithm: the standard idea is to
take one of the data points in the cluster as the medoid (so it is the most
representative data point). The way the most representative data point is
chosen may vary, but the most reasonable idea (one that is resistant to
outliers) is to pick the point that has the lowest cumulative distance to all
other points. Since this is a costly operation, sometimes it is done only on
a sample of the points in the cluster. Here is the algorithm:

Basic K-medoid Algorithm:

1. Select K points as the initial medoids.
2. Assign all points to the closest medoid.
3. See if any other point is a "better" medoid (i.e., has the lowest cumulative distance to all other points).
       Finding a better medoid involves comparing all pairs of medoid and non-medoid points and is relatively inefficient--sampling may be used.
4. Repeat steps 2 and 3 until the medoids don't change.
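
Here is a minimal Common Lisp sketch of the loop above, for the curious
(1-D points, a naive "first K points" initialization, and no handling of
empty clusters or duplicate values--an illustration, not code from the class):

(defun dist (a b) (abs (- a b)))              ; 1-D distance

(defun nearest (p medoids)                    ; helper for step 2
  (first (sort (copy-list medoids) #'<
               :key (lambda (m) (dist p m)))))

(defun best-medoid (cluster)                  ; step 3: the point with the
  (first (sort (copy-list cluster) #'<        ; lowest cumulative distance
               :key (lambda (c)
                      (reduce #'+ cluster
                              :key (lambda (p) (dist c p)))))))

(defun k-medoid (points k)
  (let ((medoids (subseq points 0 k)))        ; step 1 (naive initialization)
    (loop
      (let* ((clusters (loop for m in medoids ; step 2: assign each point
                             collect (remove-if-not
                                       (lambda (p) (= m (nearest p medoids)))
                                       points)))
             (new (mapcar #'best-medoid clusters)))
        (when (equal new medoids)             ; step 4: stop when the
          (return clusters))                  ;         medoids don't change
        (setf medoids new)))))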

Rao

ps: Here is a paper that does use this type of clustering:
  http://coblitz.codeen.org:3125/citeseer.ist.psu.edu/cache/papers/cs/27046/http:zSzzSzwww-faculty.cs.uiuc.eduzSz~hanjzSzpdfzSzvldb94.pdf/ng94efficient.pdf


Thursday, March 8, 2007

Median of 2 Dimensional Vectors

We had a discussion in class earlier today about whether the median of a set must itself be an element of the set. If it must, then the method described in class for taking the median of 2-D vectors might not hold, because taking the median of the x components and the median of the y components and combining them can produce a vector that was not originally present in the set.

However, it turns out that even for 1-dimensional data, whenever we have an even number of elements in the set, the median is conventionally taken to be the mean of the 2 middle elements (when the set is arranged in non-increasing or non-decreasing order)--so even there the median need not belong to the set.

Therefore the idea put forward in class seems reasonable: even if the "median vector" itself is not part of the set, it can be thought of as the median.
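
A tiny illustration of the point (my own example, using the even-length
convention above):

(defun median (xs)
  (let* ((s (sort (copy-list xs) #'<))
         (n (length s))
         (mid (floor n 2)))
    (if (oddp n)
        (nth mid s)                                ; odd: the middle element
        (/ (+ (nth (1- mid) s) (nth mid s)) 2))))  ; even: mean of the two middle

;; For the 2-D points (1 5), (3 1), (4 4):
;;   (median '(1 3 4)) => 3   ; median of the x components
;;   (median '(5 1 4)) => 4   ; median of the y components
;; The "median vector" (3 4) is not one of the three original points.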

Homework 3 socket opened..

Homework 3 socket has been opened and two questions on clustering have been added.
You may want to print it on laminated paper so it won't get wet as you are working on
the problems while relaxing on the beach...

rao


Tuesday, March 6, 2007

On the importance of skepticism in reading papers....

Folks
  For those of you who have uploaded your comments on the anatomy paper, thanks. The rest of you, you should!

One thing I should tell you all is that you need to be more critical and skeptical in reading papers. Too many of the
reviews seem to be "hagiographies" and those are not particularly useful...

cheers
rao

An example of using K-means to do grading..

Folks

In the class today, I mentioned how K-means is a good fit for
converting numerical scores into grades (since the number of clusters
is known).

Some years back (back when there were still A/B/C/D/E grades), I sent
the following mail to the 494 class to illustrate the use of K-means
to assign letter grades based on actual mid-term marks. Although
the impact is not as great on you, since it is not _your_ marks that
are being clustered, you can still, I am sure, feel sympathetic pangs
of understanding.. (this also works well as a nice way to understand
all of K-means' properties)

(You may also note, from the example below, that the distribution of
marks on my exams tends to have very high variance...)

yours in questionable taste
Rao


===============================

The midterm has been graded. The stats are: avg: 29, standard
deviation: 14.5.
33 people took the exam, which was for 63 points (not 65, as I seem to
have miscounted).
There is 1 above 60,
one between 50-60,
4 between 40-50,
8 between 30-40,
8 between 20-30,
7 between 10-20,
4 below 10.


I will return the exams at the end of the class on Monday. Don't ask
me for marks before that.


Now I am sure you all want to know what the "A", "B", etc. cutoffs
would be just for this exam.


I thought we could use the k-means algorithm to do the clustering. After
all, the usual problem with k-means--that it requires us to
pre-specify the number of clusters--is not really a problem here, since
we are going to have a fixed number of letter grades anyway.


So, I wrote up a little (lisp) program for k-means and ran it on the
list of 33 individual scores. Here, for your edification are the
results:

Case 1: 5 clusters (A,B,C,D,E)

case 1
USER(218): (k-means mlist 5 :key #'mark-val)


>>>>((61.5) (38) (32) (26) (17.5))
>>>>((61.5 55) (48 47.5 47.5 47.5 38 37 35) (34.5 32.5 32.5 32 30 29)
(28 27 27 26 25.5 22.5)
(20.5 19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity
Measure:113.07143
>>>>((61.5 55) (48 47.5 47.5 47.5 38) (37 35 34.5 32.5 32.5 32 30 29)
(28 27 27 26 25.5 22.5 20.5)
(19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity
Measure:97.791214
>>>>((61.5 55) (48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32 30) (29
28 27 27 26 25.5 22.5 20.5 19)
(18 17.5 17 13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity
Measure:88.91668
>>>>((61.5 55) (48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32 30) (29
28 27 27 26 25.5 22.5 20.5 19)
(18 17.5 17 13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity
Measure:88.91668


;;Notice that the k-means starts with some default cluster centers and
;;iterates until the clusters stabilize. So the last ">>>>" above is the
;;clustering we get.
;;(if we stopped with the first iteration, people with 34.5 would have
;;gotten A ;-)


The dissimilarity measure is reduced from iteration to iteration
in each run.


[[For the rest of the examples--except Case 1'--I don't show the cluster
dissimilarity measure]]


Case 2: 4 clusters (A,B,C,D) (assume I decided not to give Es)
USER(162): (k-means mlist 4 :key #'mark-val)


>>>>((61.5) (35) (27) (17.5))
>>>>((61.5 55) (48 47.5 47.5 47.5 38 37 35 34.5 32.5 32.5 32) (30 29
28 27 27 26 25.5 22.5)
(20.5 19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4))
>>>>((61.5 55) (48 47.5 47.5 47.5 38 37 35 34.5) (32.5 32.5 32 30 29
28 27 27 26 25.5 22.5 20.5)
(19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4))
>>>>((61.5 55) (48 47.5 47.5 47.5 38 37 35) (34.5 32.5 32.5 32 30 29
28 27 27 26 25.5 22.5 20.5)
(19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4))
>>>>((61.5 55) (48 47.5 47.5 47.5 38 37) (35 34.5 32.5 32.5 32 30 29
28 27 27 26 25.5 22.5 20.5)
(19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4))
>>>>((61.5 55) (48 47.5 47.5 47.5 38 37) (35 34.5 32.5 32.5 32 30 29
28 27 27 26 25.5 22.5)
(20.5 19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4))
>>>>((61.5 55) (48 47.5 47.5 47.5 38 37) (35 34.5 32.5 32.5 32 30 29
28 27 27 26 25.5 22.5)
(20.5 19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4))


Case 3: 4 clusters (A,B,C,D) but we remove the highest (61.5) from
consideration (assume that we give that person an A+ or just physically
push that person out of the class for the sake of collective happiness).


USER(208): (k-means (cdr mlist) 4 :key #'mark-val)

>>>>((55) (34.5) (27) (17))
>>>>((55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32) (30 29 28 27
27 26 25.5 22.5)
(20.5 19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4))
>>>>((55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32) (30 29 28 27
27 26 25.5 22.5 20.5)
(19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4))
>>>>((55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32) (30 29 28 27
27 26 25.5 22.5 20.5)
(19 18 17.5 17 13.5 13 11.5 9.5 8.5 7 4))

As expected, with 61.5 tossed out of the class, a lot more people get
As ;-)

Case 1': Clustering with a different set of initial centers.
We will repeat case 1, except we start with different centers.


case 1'
USER(219): (k-means mlist-r 5 :key #'mark-val)


>>>>((35) (32) (26) (18) (17))
>>>>((61.5 55 48 47.5 47.5 47.5 38 37 35 34.5) (32.5 32.5 32 30 29)
(28 27 27 26 25.5 22.5) (20.5 19 18 17.5)
(17 13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity Measure:117.0
>>>>((61.5 55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32 30 29)
(28 27 27 26 25.5 22.5) (20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity Measure:82.19365
>>>>((61.5 55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32 30) (29
28 27 27 26 25.5 22.5) (20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity Measure:80.37619
>>>>((61.5 55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32) (30 29
28 27 27 26 25.5 22.5) (20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity Measure:78.55476
>>>>((61.5 55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32) (30 29
28 27 27 26 25.5) (22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity Measure:78.571434
>>>>((61.5 55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32) (30 29
28 27 27 26 25.5) (22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4)) --Dissimilarity Measure:78.571434

Compare this to case 1--see that we have now converged to an entirely
different clustering just because we started from new centers.


Note:
* The lowest dissimilarity attained depends on the original
cluster centers. This is a consequence of the fact that K-means is
a greedy algorithm and does not find the clustering with the globally
lowest cluster dissimilarity.

* It is nice to see that the clusters found in case 1' are better
(according to the dissimilarity metric) than those found in case 1
(because this means that giving more As is in fact a better
idea according to k-means ;-)


case 2': repeat case 2 with different centers

USER(209): (k-means mlist-r 4 :key #'mark-val)

>>>>((35) (22.5) (18) (9.5))
>>>>((61.5 55 48 47.5 47.5 47.5 38 37 35 34.5 32.5 32.5 32 30 29) (28
27 27 26 25.5 22.5 20.5) (19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))
>>>>((61.5 55 48 47.5 47.5 47.5 38 37 35 34.5) (32.5 32.5 32 30 29 28
27 27 26 25.5 22.5) (20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))
>>>>((61.5 55 48 47.5 47.5 47.5 38 37) (35 34.5 32.5 32.5 32 30 29 28
27 27 26 25.5) (22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))
>>>>((61.5 55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32 30 29 28
27 27 26 25.5) (22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))
>>>>((61.5 55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32 30 29 28
27 27 26 25.5) (22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))

;;interesting how going from 5 clusters to 4 just merges the old B and C...


case 3': repeat case 3 with different centers (we just removed 61.5)

USER(211): (k-means mlist-r-2 4 :key #'mark-val)

>>>>((35) (30) (22.5) (9.5))
>>>>((55 48 47.5 47.5 47.5 38 37 35 34.5 32.5 32.5) (32 30 29 28 27
27) (26 25.5 22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))
>>>>((55 48 47.5 47.5 47.5 38 37) (35 34.5 32.5 32.5 32 30 29 28 27 27
26 25.5) (22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))
>>>>((55 48 47.5 47.5 47.5 38) (37 35 34.5 32.5 32.5 32 30 29 28 27 27
26 25.5) (22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))
>>>>((55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32 30 29 28 27 27
26 25.5) (22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))
>>>>((55 48 47.5 47.5 47.5) (38 37 35 34.5 32.5 32.5 32 30 29 28 27 27
26 25.5) (22.5 20.5 19 18 17.5 17)
(13.5 13 11.5 9.5 8.5 7 4))

Notice the non-local and somewhat non-intuitive effect on the
clusters.. The A cutoff still remains 47.5, but B now got chopped up..


You would have thought more people would get better grades if the
highest guy got thrown out... and yet...


In summary, you wouldn't trust your grades to K-means-based
clustering because:


1. its clusters are sensitive to initial centers

2. its clusters are sensitive to outliers

3. deletion of elements might have non-local effects on its clusters
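
(Since the little lisp program itself isn't shown above, here is a minimal
reconstruction of a 1-D k-means in the same spirit--it takes the initial
centers explicitly, which is exactly the knob that cases 1 and 1' turn; the
details are my own, not Rao's actual code:)

(defun mean (xs) (/ (reduce #'+ xs) (length xs)))

(defun nearest-center (x centers)
  (first (sort (copy-list centers) #'<
               :key (lambda (c) (abs (- x c))))))

(defun k-means (points centers)
  ;; assign each point to its nearest center, recompute each center as
  ;; its cluster's mean, and repeat until the centers stop moving
  (let* ((clusters (loop for c in centers
                         collect (remove-if-not
                                   (lambda (x) (= c (nearest-center x centers)))
                                   points)))
         (new (mapcar #'mean (remove nil clusters))))  ; drop empty clusters
    (if (equal new centers)
        clusters
        (k-means points new))))

;; e.g. (k-means scores '(61.5 38 32 26 17.5)) starts from the same five
;; centers as case 1; change the list and the final clustering can change.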


I know I know. You see this whole thing as a joke in bad taste....

Rao
=============

Saturday, March 3, 2007

Pattern for Reading Web Content

Here's an interesting article on the pattern for reading Web content, based on tracking users' eye movements while they view Web content.

http://www.useit.com/alertbox/reading_pattern.html

Friday, March 2, 2007

Clustering readings..

For next week's discussion on clustering, the recommended reading has
been changed to chapters 16 and 17 from the IR book. Please read 16
for Tuesday and 17 for Thursday.


Rao

Class survey results (summary as sent by Qihong Shao)

Folks

Here are the results of the survey (thanks to Qihong for converting
the hardcopy datasheets into this). In the original spreadsheet the
percentages are color-coded--purple for CSE494, yellow for 598, and a
red-magenta for the totals; in the text version below the columns are
labeled instead.

Other than the comment regarding slides (which is legitimate.. I often
wind up re-ordering and adding slides after class to reflect things that happened in the class),
I didn't see any big red-lights. If you do, feel free to alert me either directly or anonymously

regards
Rao


CSE 494/598 Spring 2007 Class Survey
Please circle the appropriate choice for each question.
(Each option below lists: UG count (UG %); Grad count (Grad %); total %.
Blank cells in the original are omitted. UG = CSE494, Grad = CSE598.)

ABOUT YOU
QN0 I am taking the course as
    CSE494: 2 students (11.1% of the class)
    CSE598: 16 students (88.9% of the class)

QN1 The pace of the class/lectures is
    somewhat slow: --
    too slow: --
    somewhat fast: Grad 3 (18.75%); total 16.7%
    too fast: 0
    just right: UG 2 (100%); Grad 13 (81.25%); total 83.3%

QN2 The lecture(s) that you liked most until now (you can circle multiple)
    The intro lectures: Grad 1 (3%); total 2.8%
    The lecture on latent semantic indexing: Grad 7 (23.3%); total 20%
    The lectures on vector similarity: UG 1 (20%); Grad 6 (20%); total 20%
    The lectures on social networks: UG 2 (40%); Grad 9 (30%); total 31.4%
    The lectures on link analysis: UG 2 (40%); Grad 7 (23.3%); total 25.7%
    None of them: --

QN3 The lecture(s) that you liked *least* until now (you can circle multiple)
    The intro lectures: UG 0; Grad 2 (18%); total 15.4%
    The lecture on latent semantic indexing: UG 2 (100%); Grad 4 (36%); total 46.1%
    The lectures on vector similarity: Grad 1 (9%); total 7.7%
    The lectures on social networks: Grad 3 (27%); total 23%
    The lectures on link analysis: Grad 1 (9%); total 7.7%
    All of them: --

QN4 Which type of lectures did you follow better
    The lectures done with slides: UG 0; Grad 4 (25%); total 22.2%
    The lectures done with white-board: UG 0; Grad 2 (12.5%); total 11.1%
    Those with a mix: UG 2 (100%); Grad 10 (62.5%); total 66.7%

QN5 The lecture style
    Keeps you engaged: UG 2 (100%); Grad 9 (56.25%); total 61.1%
    Overwhelms you: Grad 1 (6.25%); total 5.6%
    Reasonable: Grad 6 (37.5%); total 33.3%
    Conducive to a postprandial snooze: --

QN6 The class is
    Too much "Teacher Talking": 0
    Sufficiently interactive: UG 2 (100%); Grad 16 (100%); total 100%
    Too interactive: 0

QN7 The material (as of now) is
    Exciting: UG 0; Grad 13 (81.25%); total 72.2%
    Boring: --
    Okay: UG 2 (100%); Grad 3 (18.75%); total 27.8%

The relation between lectures and readings on the readings list is
    There is barely any relation: --
    There is reasonable relation: UG 2 (100%); Grad 7 (43.75%); total 50%
    Very well related: Grad 9 (56.25%); total 50%

QN8 The presentation level
    Goes right over my head: Grad 1 (6.25%); total 5.6%
    Spoonfeeds me by repeating every little thing: 0
    Just right for me: UG 2; Grad 15 (93.75%); total 94.4%

QN9 Most of the extra stuff sent on the class e-mail list and blog is, for the most part,
    Lapped-up by me: UG 2 (100%); Grad 12 (75%); total 77.8%
    Ignored by me: Grad 4 (25%); total 22.2%
    What extra stuff?: 0

QN10 I am able to get help when needed from the Instructor
    Strongly agree: UG 1 (50%); Grad 8 (50%); total 50%
    Okay: UG 1 (50%); Grad 8 (50%); total 50%
    Not really..: 0

QN11 I am able to get help when needed from the TA
    Strongly agree: UG 2 (100%); Grad 9 (56.25%); total 61.1%
    Okay: Grad 6 (37.5%); total 33.3%
    Not really..: UG 0; Grad 1 (6.25%); total 5.6%

QN12 The expectations/demands of the course/instructor are
    Unreasonably high: Grad 1 (6.7%); total 5.9%
    Unreasonably low: Grad 1 (6.7%); total 5.9%
    Challenging but reasonable: UG 2 (100%); Grad 13 (86.7%); total 82.4%

QN13 The lecture material is
    Too depth oriented: 0
    Too breadth oriented: Grad 4 (25%); total 22.2%
    Just right: UG 2 (100%); Grad 12 (75%); total 77.8%

QN14 The lectures are
    Too much intuition and too little formal detail: UG 1 (50%); Grad 2 (12.5%); total 16.7%
    Too much formal detail, too little intuition: Grad 1 (6.25%); total 5.6%
    Balanced between intuition and detail: UG 1 (50%); Grad 13 (81.25%); total 77.8%

QN15 In your opinion there should be
    More frequent homeworks: Grad 1 (6.25%); total 5.6%
    Less frequent homeworks: Grad 6 (37.5%); total 33.3%
    Current schedule is fine: UG 2 (100%); Grad 9 (56.25%); total 61.1%

QN16 In your opinion there should be (multiple answers okay)
    There should be more frequent projects: 0
    There should be less frequent projects: UG 2 (100%); Grad 6 (37.5%); total 44.4%
    The projects should be more challenging: Grad 6 (37.5%); total 33.3%
    The projects should be way less challenging: Grad 1 (6.25%); total 5.56%
    They are just fine: UG 0; Grad 3 (18.75%); total 16.7%

QN16 The projects are
    Insultingly easy: Grad 3 (18.75%); total 16.7%
    Too hard: Grad 2 (12.5%); total 11.1%
    Just right: UG 2 (100%); Grad 11 (68.75%); total 72.2%

QN17 The homeworks are
    Insultingly easy: 0
    Too hard: Grad 3 (18.75%); total 16.7%
    Just right: UG 2 (100%); Grad 13 (81.25%); total 83.3%

QN18 Your stress/anxiety level about this course relative to your other courses is
    Very high: UG 1 (50%); Grad 10 (62.5%); total 61.1%
    Very low: 0
    About the same: UG 1 (50%); Grad 6 (37.5%); total 38.9%

QN19 The discussion is
    Very helpful: UG 2 (100%); Grad 7 (63.7%); total 69.2%
    Normal: Grad 4 (36.3%); total 31.1%
    Useless: --

If you have additional comments/feedback, either write below or send a mail
using http://rakaposhi.eas.asu.edu/cgi-bin/mail?rao

Comments:

  • The discussion is very helpful; I like how the class consists of lectures and discussions. It allows the class to read papers and ask questions during the discussion.
  • Could you please keep the content/order of the slides more consistent? I found that the slides change from time to time, and I spent quite some time figuring out the differences between the latest version and the one I had printed out and read.