Michael Mächtel 8 years ago
parent
commit
cb03c44a82
10 changed files with 1252 additions and 0 deletions
  1. files/hw5.txt (+20, -0)
  2. hw5/README.md (+30, -0)
  3. hw5/simu1/QUESTIONS.md (+22, -0)
  4. hw5/simu1/README-scheduler.md (+129, -0)
  5. hw5/simu1/scheduler.py (+155, -0)
  6. hw5/simu2/QUESTIONS.md (+27, -0)
  7. hw5/simu2/README-mlfq.md (+184, -0)
  8. hw5/simu2/mlfq.py (+338, -0)
  9. hw5/task1/README.md (+243, -0)
  10. hw5/task1/tests/output.bats (+104, -0)

+ 20
- 0
files/hw5.txt

@@ -0,0 +1,20 @@
1
+./hw5/README.md
2
+
3
+./hw5/task1/Cargo.lock
4
+./hw5/task1/Cargo.toml
5
+./hw5/task1/src/main.rs
6
+./hw5/task1/src/zombie/mod.rs
7
+./hw5/task1/src/child/mod.rs
8
+./hw5/task1/src/child/pstree.rs
9
+./hw5/task1/src/unit_tests.rs
10
+./hw5/task1/tests/output.bats
11
+
12
+./hw5/simu1/ANSWERS.md
13
+./hw5/simu1/QUESTIONS.md
14
+./hw5/simu1/README-scheduler.md
15
+./hw5/simu1/scheduler.py
16
+
17
+./hw5/simu2/ANSWERS.md
18
+./hw5/simu2/QUESTIONS.md
19
+./hw5/simu2/README-mlfq.md
20
+./hw5/simu2/mlfq.py

+ 30
- 0
hw5/README.md

@@ -0,0 +1,30 @@
1
+# hw5
2
+
3
+## Tasks
4
+
5
+To fulfill **hw5** you have to solve:
6
+
7
+- task1
8
+- simu1
9
+- simu2
10
+
11
+## Files
12
+
13
+You will already find, or can reuse, some files for the Rust tasks. Please remember to
14
+use cargo to create the relevant projects for each task.
15
+
16
+## Pull-Request
17
+
18
+Please merge any accepted reviews into your branch. When you have finished the
19
+homework and all tests pass, please create a pull request named **hw5**.
20
+
21
+## Credits for hw5
22
+
23
+| Task     | max. Credits | Comment |
24
+| -------- | ------------ | ------- |
25
+| task1    | 1.5          |         |
26
+| task2    | 0.5          |         |
27
+| simu1    | 1            |         |
28
+| simu2    | 1            |         |
29
+| Deadline | +1           |         |
30
+| Total    | 5            |         |

+ 22
- 0
hw5/simu1/QUESTIONS.md

@@ -0,0 +1,22 @@
1
+# Questions 7-Scheduler-Intro
2
+
3
+This program, **scheduler.py**, allows you to see how different schedulers
4
+perform under scheduling metrics such as response time, turnaround time, and
5
+total wait time. See the README for details.
6
+
7
+## Questions
8
+
9
+1. Compute the average response time and average turnaround time when running
10
+   three jobs of length 200 with the SJF and FIFO schedulers.
11
+1. Now do the same but with jobs of different lengths: 300, 200, and 100.
12
+1. Now do the same (1.+2.), but also with the RR scheduler and a time-slice of
13
+   1.
14
+1. For what types of workloads does SJF deliver the same turnaround times as
15
+   FIFO?
16
+1. For what types of workloads and quantum lengths does SJF deliver the same
17
+   response times as RR?
18
+1. What happens to response time with SJF as job lengths increase? Can you use
19
+   the simulator to demonstrate the trend?
20
+1. What happens to response time with RR as quantum lengths increase? Can you
21
+   write an equation that gives the average worst-case response time, given N
22
+   jobs?

+ 129
- 0
hw5/simu1/README-scheduler.md

@@ -0,0 +1,129 @@
1
+# README Scheduler
2
+
3
+This program, **scheduler.py**, allows you to see how different schedulers
4
+perform under scheduling metrics such as response time, turnaround time, and
5
+total wait time. Three schedulers are "implemented": FIFO, SJF, and RR.
6
+
7
+There are two steps to running the program.
8
+
9
+First, run without the -c flag: this shows you what problem to solve without
10
+revealing the answers. For example, if you want to compute response, turnaround,
11
+and wait for three jobs using the FIFO policy, run this:
12
+
13
+```text
14
+  ./scheduler.py -p FIFO -j 3 -s 100
15
+```
16
+
17
+If that doesn't work, try this:
18
+
19
+```text
20
+  python ./scheduler.py -p FIFO -j 3 -s 100
21
+```
22
+
23
+This specifies the FIFO policy with three jobs, and, importantly, a specific
24
+random seed of 100. If you want to see the solution for this exact problem, you
25
+have to specify this exact same random seed again. Let's run it and see what
26
+happens. This is what you should see:
27
+
28
+```text
29
+prompt> ./scheduler.py -p FIFO -j 3 -s 100
30
+ARG policy FIFO
31
+ARG jobs 3
32
+ARG maxlen 10
33
+ARG seed 100
34
+
35
+Here is the job list, with the run time of each job:
36
+  Job 0 (length = 1)
37
+  Job 1 (length = 4)
38
+  Job 2 (length = 7)
39
+
40
+Compute the turnaround time, response time, and wait time for each job.  When
41
+you are done, run this program again, with the same arguments, but with -c,
42
+which will thus provide you with the answers. You can use -s <somenumber> or
43
+your own job list (-l 10,15,20 for example) to generate different problems for
44
+yourself.
45
+```
46
+
47
+As you can see from this example, three jobs are generated: job 0 of length 1,
48
+job 1 of length 4, and job 2 of length 7. As the program states, you can now use
49
+this to compute some statistics and see if you have a grip on the basic
50
+concepts.
51
+
52
+Once you are done, you can use the same program to "solve" the problem and see
53
+if you did your work correctly. To do so, use the "-c" flag. The output:
54
+
55
+```text
56
+prompt> ./scheduler.py -p FIFO -j 3 -s 100 -c
57
+ARG policy FIFO
58
+ARG jobs 3
59
+ARG maxlen 10
60
+ARG seed 100
61
+
62
+Here is the job list, with the run time of each job:
63
+  Job 0 (length = 1)
64
+  Job 1 (length = 4)
65
+  Job 2 (length = 7)
66
+
67
+** Solutions **
68
+
69
+Execution trace:
70
+  [time   0] Run job 0 for 1.00 secs (DONE)
71
+  [time   1] Run job 1 for 4.00 secs (DONE)
72
+  [time   5] Run job 2 for 7.00 secs (DONE)
73
+
74
+Final statistics:
75
+  Job   0 -- Response: 0.00  Turnaround 1.00  Wait 0.00
76
+  Job   1 -- Response: 1.00  Turnaround 5.00  Wait 1.00
77
+  Job   2 -- Response: 5.00  Turnaround 12.00  Wait 5.00
78
+
79
+  Average -- Response: 2.00  Turnaround 6.00  Wait 2.00
80
+```
81
+
82
+As you can see from this output, the -c flag shows you what happened. Job 0 ran
83
+first for 1 second, Job 1 ran second for 4, and then Job 2 ran for 7 seconds.
84
+Not too hard; it is FIFO, after all! The execution trace shows these results.
85
+
86
+The final statistics are useful too: they compute the "response time" (the time
87
+a job spends waiting after arrival before first running), the "turnaround time"
88
+(the time it took to complete the job since first arrival), and the total "wait
89
+time" (any time spent ready but not running). The stats are shown per job and
90
+then as an average across all jobs. Of course, you should have computed these
91
+things all before running with the "-c" flag!
92
+
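As a quick cross-check of these definitions (response = time from arrival until first
run, turnaround = time from arrival until completion, wait = time spent ready but not
running), here is a minimal Python sketch for the FIFO example above, assuming all
three jobs arrive at time 0:

```python
# FIFO metrics for the example jobs of lengths 1, 4 and 7 (all arrive at time 0).
lengths = [1, 4, 7]

t = 0.0
response, turnaround, wait = [], [], []
for runtime in lengths:
    response.append(t)               # waited from arrival (time 0) until first run
    turnaround.append(t + runtime)   # arrival until completion
    wait.append(t)                   # no preemption under FIFO, so wait == response
    t += runtime

for name, values in (('Response', response), ('Turnaround', turnaround), ('Wait', wait)):
    print('%s: average %.2f' % (name, sum(values) / len(values)))
```

Running it reproduces the averages shown above (2.00, 6.00 and 2.00).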
93
+If you want to try the same type of problem but with different inputs, try
94
+changing the number of jobs or the random seed or both. Different random seeds
95
+basically give you a way to generate an infinite number of different problems
96
+for yourself, and the "-c" flag lets you check your own work. Keep doing this
97
+until you feel like you really understand the concepts.
98
+
99
+One other useful flag is "-l" (that's a lower-case L), which lets you specify
100
+the exact jobs you wish to see scheduled. For example, if you want to find out
101
+how SJF would perform with three jobs of lengths 5, 10, and 15, you can run:
102
+
103
+```text
104
+prompt> ./scheduler.py -p SJF -l 5,10,15
105
+ARG policy SJF
106
+ARG jlist 5,10,15
107
+
108
+Here is the job list, with the run time of each job:
109
+  Job 0 (length = 5.0)
110
+  Job 1 (length = 10.0)
111
+  Job 2 (length = 15.0)
112
+...
113
+```
114
+
115
+And then you can use -c to solve it again. Note that when you specify the exact
116
+jobs, there is no need to specify a random seed or the number of jobs: the job
117
+lengths are taken from your comma-separated list.
118
+
119
+Of course, more interesting things happen when you use SJF (shortest-job first)
120
+or even RR (round robin) schedulers. Try them and see!
121
+
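For RR, the time slice is set with the -q flag (see the option list in scheduler.py
below), so a quick experiment might look like this:

```text
  ./scheduler.py -p RR -l 5,10,15 -q 2 -c
```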
122
+And you can always run
123
+
124
+```text
125
+  ./scheduler.py -h
126
+```
127
+
128
+to get a complete list of flags and options (including options such as setting
129
+the time quantum for the RR scheduler).

+ 155
- 0
hw5/simu1/scheduler.py

@@ -0,0 +1,155 @@
1
+#! /usr/bin/env python
2
+
3
+import sys
4
+from optparse import OptionParser
5
+import random
6
+
7
+parser = OptionParser()
8
+parser.add_option("-s", "--seed", default=0, help="the random seed", 
9
+                  action="store", type="int", dest="seed")
10
+parser.add_option("-j", "--jobs", default=3, help="number of jobs in the system",
11
+                  action="store", type="int", dest="jobs")
12
+parser.add_option("-l", "--jlist", default="", help="instead of random jobs, provide a comma-separated list of run times",
13
+                  action="store", type="string", dest="jlist")
14
+parser.add_option("-m", "--maxlen", default=10, help="max length of job",
15
+                  action="store", type="int", dest="maxlen")
16
+parser.add_option("-p", "--policy", default="FIFO", help="sched policy to use: SJF, FIFO, RR",
17
+                  action="store", type="string", dest="policy")
18
+parser.add_option("-q", "--quantum", help="length of time slice for RR policy", default=1, 
19
+                  action="store", type="int", dest="quantum")
20
+parser.add_option("-c", help="compute answers for me", action="store_true", default=False, dest="solve")
21
+
22
+(options, args) = parser.parse_args()
23
+
24
+random.seed(options.seed)
25
+
26
+print 'ARG policy', options.policy
27
+if options.jlist == '':
28
+    print 'ARG jobs', options.jobs
29
+    print 'ARG maxlen', options.maxlen
30
+    print 'ARG seed', options.seed
31
+else:
32
+    print 'ARG jlist', options.jlist
33
+
34
+print ''
35
+
36
+print 'Here is the job list, with the run time of each job: '
37
+
38
+import operator
39
+
40
+joblist = []
41
+if options.jlist == '':
42
+    for jobnum in range(0,options.jobs):
43
+        runtime = int(options.maxlen * random.random()) + 1
44
+        joblist.append([jobnum, runtime])
45
+        print '  Job', jobnum, '( length = ' + str(runtime) + ' )'
46
+else:
47
+    jobnum = 0
48
+    for runtime in options.jlist.split(','):
49
+        joblist.append([jobnum, float(runtime)])
50
+        jobnum += 1
51
+    for job in joblist:
52
+        print '  Job', job[0], '( length = ' + str(job[1]) + ' )'
53
+print '\n'
54
+
55
+if options.solve == True:
56
+    print '** Solutions **\n'
57
+    if options.policy == 'SJF':
58
+        joblist = sorted(joblist, key=operator.itemgetter(1))
59
+        options.policy = 'FIFO'
60
+    
61
+    if options.policy == 'FIFO':
62
+        thetime = 0
63
+        print 'Execution trace:'
64
+        for job in joblist:
65
+            print '  [ time %3d ] Run job %d for %.2f secs ( DONE at %.2f )' % (thetime, job[0], job[1], thetime + job[1])
66
+            thetime += job[1]
67
+
68
+        print '\nFinal statistics:'
69
+        t     = 0.0
70
+        count = 0
71
+        turnaroundSum = 0.0
72
+        waitSum       = 0.0
73
+        responseSum   = 0.0
74
+        for tmp in joblist:
75
+            jobnum  = tmp[0]
76
+            runtime = tmp[1]
77
+            
78
+            response   = t
79
+            turnaround = t + runtime
80
+            wait       = t
81
+            print '  Job %3d -- Response: %3.2f  Turnaround %3.2f  Wait %3.2f' % (jobnum, response, turnaround, wait)
82
+            responseSum   += response
83
+            turnaroundSum += turnaround
84
+            waitSum       += wait
85
+            t += runtime
86
+            count = count + 1
87
+        print '\n  Average -- Response: %3.2f  Turnaround %3.2f  Wait %3.2f\n' % (responseSum/count, turnaroundSum/count, waitSum/count)
88
+                     
89
+    if options.policy == 'RR':
90
+        print 'Execution trace:'
91
+        turnaround = {}
92
+        response = {}
93
+        lastran = {}
94
+        wait = {}
95
+        quantum  = float(options.quantum)
96
+        jobcount = len(joblist)
97
+        for i in range(0,jobcount):
98
+            lastran[i] = 0.0
99
+            wait[i] = 0.0
100
+            turnaround[i] = 0.0
101
+            response[i] = -1
102
+
103
+        runlist = []
104
+        for e in joblist:
105
+            runlist.append(e)
106
+
107
+        thetime  = 0.0
108
+        while jobcount > 0:
109
+            # print '%d jobs remaining' % jobcount
110
+            job = runlist.pop(0)
111
+            jobnum  = job[0]
112
+            runtime = float(job[1])
113
+            if response[jobnum] == -1:
114
+                response[jobnum] = thetime
115
+            currwait = thetime - lastran[jobnum]
116
+            wait[jobnum] += currwait
117
+            if runtime > quantum:
118
+                runtime -= quantum
119
+                ranfor = quantum
120
+                print '  [ time %3d ] Run job %3d for %.2f secs' % (thetime, jobnum, ranfor)
121
+                runlist.append([jobnum, runtime])
122
+            else:
123
+                ranfor = runtime
124
+                print '  [ time %3d ] Run job %3d for %.2f secs ( DONE at %.2f )' % (thetime, jobnum, ranfor, thetime + ranfor)
125
+                turnaround[jobnum] = thetime + ranfor
126
+                jobcount -= 1
127
+            thetime += ranfor
128
+            lastran[jobnum] = thetime
129
+
130
+        print '\nFinal statistics:'
131
+        turnaroundSum = 0.0
132
+        waitSum       = 0.0
133
+        responseSum   = 0.0
134
+        for i in range(0,len(joblist)):
135
+            turnaroundSum += turnaround[i]
136
+            responseSum += response[i]
137
+            waitSum += wait[i]
138
+            print '  Job %3d -- Response: %3.2f  Turnaround %3.2f  Wait %3.2f' % (i, response[i], turnaround[i], wait[i])
139
+        count = len(joblist)
140
+        
141
+        print '\n  Average -- Response: %3.2f  Turnaround %3.2f  Wait %3.2f\n' % (responseSum/count, turnaroundSum/count, waitSum/count)
142
+
143
+    if options.policy != 'FIFO' and options.policy != 'SJF' and options.policy != 'RR': 
144
+        print 'Error: Policy', options.policy, 'is not available.'
145
+        sys.exit(0)
146
+else:
147
+    print 'Compute the turnaround time, response time, and wait time for each job.'
148
+    print 'When you are done, run this program again, with the same arguments,'
149
+    print 'but with -c, which will thus provide you with the answers. You can use'
150
+    print '-s <somenumber> or your own job list (-l 10,15,20 for example)'
151
+    print 'to generate different problems for yourself.'
152
+    print ''
153
+
154
+
155
+

+ 27
- 0
hw5/simu2/QUESTIONS.md

@@ -0,0 +1,27 @@
1
+# Questions: 8-Scheduler-MLFQ
2
+
3
+This program, **mlfq.py**, allows you to see how the MLFQ scheduler presented in
4
+this chapter behaves. See the README for details.
5
+
6
+Run a few randomly-generated problems with just two jobs and two queues; compute
7
+the MLFQ execution trace for each. Make your life easier by limiting the length
8
+of each job and turning off I/Os.
9
+
10
+## Questions
11
+
12
+1. How would you run the scheduler to reproduce each of the examples in the
13
+   chapter? For each figure number in the PDF chapter, give the corresponding
14
+   simulator call.
15
+1. How would you configure the scheduler parameters to behave just like a
16
+   round-robin scheduler?
17
+1. Craft a workload with two jobs and scheduler parameters so that one job takes
18
+   advantage of the older Rules 4a and 4b (turned on with the -S flag) to game
19
+   the scheduler and obtain 99% of the CPU over a particular time interval.
20
+1. Given a system with a quantum length of 10ms in its highest queue, how often
21
+   would you have to boost jobs back to the highest priority level (with the -B
22
+   flag) in order to guarantee that a single long-running (and
23
+   potentially-starving) job gets at least 5% of the CPU?
24
+1. One question that arises in scheduling is which end of a queue to add a job
25
+   that just finished I/O; the -I flag changes this behavior for this scheduling
26
+   simulator. Play around with some workloads and see if you can see the effect
27
+   of this flag.

+ 184
- 0
hw5/simu2/README-mlfq.md

@@ -0,0 +1,184 @@
1
+# README Scheduler: MLFQ
2
+
3
+This program, **mlfq.py**, allows you to see how the MLFQ scheduler presented in
4
+this chapter behaves. As before, you can use this to generate problems for
5
+yourself using random seeds, or use it to construct a carefully-designed
6
+experiment to see how MLFQ works under different circumstances. To run the
7
+program, type:
8
+
9
+```text
10
+prompt> ./mlfq.py
11
+```
12
+
13
+Use the help flag (-h) to see the options:
14
+
15
+```text
16
+Usage: mlfq.py [options]
17
+Options:
18
+  -h, --help            show this help message and exit
19
+  -s SEED, --seed=SEED  the random seed
20
+  -n NUMQUEUES, --numQueues=NUMQUEUES
21
+                        number of queues in MLFQ (if not using -Q)
22
+  -q QUANTUM, --quantum=QUANTUM
23
+                        length of time slice (if not using -Q)
24
+  -Q QUANTUMLIST, --quantumList=QUANTUMLIST
25
+                        length of time slice per queue level,
26
+                        specified as x,y,z,... where x is the
27
+                        quantum length for the highest-priority
28
+                        queue, y the next highest, and so forth
29
+  -j NUMJOBS, --numJobs=NUMJOBS
30
+                        number of jobs in the system
31
+  -m MAXLEN, --maxlen=MAXLEN
32
+                        max run-time of a job (if random)
33
+  -M MAXIO, --maxio=MAXIO
34
+                        max I/O frequency of a job (if random)
35
+  -B BOOST, --boost=BOOST
36
+                        how often to boost the priority of all
37
+                        jobs back to high priority (0 means never)
38
+  -i IOTIME, --iotime=IOTIME
39
+                        how long an I/O should last (fixed constant)
40
+  -S, --stay            reset and stay at same priority level
41
+                        when issuing I/O
42
+  -l JLIST, --jlist=JLIST
43
+                        a comma-separated list of jobs to run,
44
+                        in the form x1,y1,z1:x2,y2,z2:... where
45
+                        x is start time, y is run time, and z
46
+                        is how often the job issues an I/O request
47
+  -c                    compute answers for me
48
+```
49
+
50
+There are a few different ways to use the simulator. One way is to generate some
51
+random jobs and see if you can figure out how they will behave given the MLFQ
52
+scheduler. For example, if you wanted to create a randomly-generated three-job
53
+workload, you would simply type:
54
+
55
+```text
56
+prompt> ./mlfq.py -j 3
57
+```
58
+
59
+What you would then see is the specific problem definition:
60
+
61
+```text
62
+Here is the list of inputs:
63
+OPTIONS jobs 3
64
+OPTIONS queues 3
65
+OPTIONS quantum length for queue  2 is  10
66
+OPTIONS quantum length for queue  1 is  10
67
+OPTIONS quantum length for queue  0 is  10
68
+OPTIONS boost 0
69
+OPTIONS ioTime 0
70
+OPTIONS stayAfterIO False
71
+
72
+
73
+For each job, three defining characteristics are given:
74
+  startTime : at what time does the job enter the system
75
+  runTime   : the total CPU time needed by the job to finish
76
+  ioFreq    : every ioFreq time units, the job issues an I/O
77
+              (the I/O takes ioTime units to complete)
78
+
79
+Job List:
80
+  Job  0: startTime   0 - runTime  84 - ioFreq   7
81
+  Job  1: startTime   0 - runTime  42 - ioFreq   2
82
+  Job  2: startTime   0 - runTime  51 - ioFreq   4
83
+
84
+Compute the execution trace for the given workloads.
85
+If you would like, also compute the response and turnaround
86
+times for each of the jobs.
87
+
88
+Use the -c flag to get the exact results when you are finished.
89
+```
90
+
91
+This generates a random workload of three jobs (as specified), on the default
92
+number of queues with a number of default settings. If you run again with the
93
+solve flag on (-c), you'll see the same print out as above, plus the following:
94
+
95
+```text
96
+Execution Trace:
97
+
98
+[time 0] JOB BEGINS by JOB 0
99
+[time 0] JOB BEGINS by JOB 1
100
+[time 0] JOB BEGINS by JOB 2
101
+[time 0] Run JOB 0 at PRI 2 [TICKSLEFT 9 RUNTIME 84 TIMELEFT 83]
102
+[time 1] Run JOB 0 at PRI 2 [TICKSLEFT 8 RUNTIME 84 TIMELEFT 82]
103
+[time 2] Run JOB 0 at PRI 2 [TICKSLEFT 7 RUNTIME 84 TIMELEFT 81]
104
+[time 3] Run JOB 0 at PRI 2 [TICKSLEFT 6 RUNTIME 84 TIMELEFT 80]
105
+[time 4] Run JOB 0 at PRI 2 [TICKSLEFT 5 RUNTIME 84 TIMELEFT 79]
106
+[time 5] Run JOB 0 at PRI 2 [TICKSLEFT 4 RUNTIME 84 TIMELEFT 78]
107
+[time 6] Run JOB 0 at PRI 2 [TICKSLEFT 3 RUNTIME 84 TIMELEFT 77]
108
+[time 7] IO_START by JOB 0
109
+[time 7] Run JOB 1 at PRI 2 [TICKSLEFT 9 RUNTIME 42 TIMELEFT 41]
110
+[time 8] Run JOB 1 at PRI 2 [TICKSLEFT 8 RUNTIME 42 TIMELEFT 40]
111
+[time 9] IO_START by JOB 1
112
+
113
+...
114
+
115
+Final statistics:
116
+  Job  0: startTime   0 - response   0 - turnaround 175
117
+  Job  1: startTime   0 - response   7 - turnaround 191
118
+  Job  2: startTime   0 - response   9 - turnaround 168
119
+
120
+  Avg  2: startTime n/a - response 5.33 - turnaround 178.00
121
+```
122
+
123
+The trace shows exactly, on a millisecond-by-millisecond time scale, what the
124
+scheduler decided to do. In this example, it begins by running Job 0 for 7 ms
125
+until Job 0 issues an I/O; this is entirely predictable, as Job 0's I/O
126
+frequency is set to 7 ms, meaning that every 7 ms it runs, it will issue an I/O
127
+and wait for it to complete before continuing. At that point, the scheduler
128
+switches to Job 1, which only runs 2 ms before issuing an I/O. The scheduler
129
+prints the entire execution trace in this manner, and finally also computes the
130
+response and turnaround times for each job as well as an average.
131
+
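The two metrics come straight from the per-job timestamps the simulator records:
response = first run minus start time, turnaround = end time minus start time (this is
how mlfq.py, shown further down, computes its final statistics). A minimal sketch with
the numbers from the run above:

```python
# Per-job values taken from the trace above: start time, first run, end time.
jobs = {
    0: {'startTime': 0, 'firstRun': 0, 'endTime': 175},
    1: {'startTime': 0, 'firstRun': 7, 'endTime': 191},
    2: {'startTime': 0, 'firstRun': 9, 'endTime': 168},
}

responses = [j['firstRun'] - j['startTime'] for j in jobs.values()]
turnarounds = [j['endTime'] - j['startTime'] for j in jobs.values()]

print('Avg: response %.2f - turnaround %.2f'
      % (sum(responses) / float(len(responses)),
         sum(turnarounds) / float(len(turnarounds))))
```

which gives the 5.33 and 178.00 reported above.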
132
+You can also control various other aspects of the simulation. For example, you
133
+can specify how many queues you'd like to have in the system (-n) and what the
134
+quantum length should be for all of those queues (-q); if you want even more
135
+control and varied quantum lengths per queue, you can instead specify the length
136
+of the quantum for each queue with -Q, e.g., -Q 10,20,30 simulates a scheduler
137
+with three queues, with the highest-priority queue having a 10-ms time slice,
138
+the next-highest a 20-ms time-slice, and the low-priority queue a 30-ms time
139
+slice.
140
+
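Internally (see mlfq.py below), the first value of the -Q list is assigned to the
highest-priority queue, which carries the largest queue number. A small sketch of that
mapping for the three-queue example:

```python
# "-Q 10,20,30": the first entry belongs to the highest-priority queue (largest index).
quantum_list = '10,20,30'

lengths = [int(x) for x in quantum_list.split(',')]
num_queues = len(lengths)
quantum = {num_queues - 1 - i: q for i, q in enumerate(lengths)}

print(quantum)   # {2: 10, 1: 20, 0: 30}
```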
141
+If you are randomly generating jobs, you can also control how long they might
142
+run for (-m), or how often they generate I/O (-M). If you, however, want more
143
+control over the exact characteristics of the jobs running in the system, you
144
+can use -l (lower-case L) or --jlist, which allows you to specify the exact set
145
+of jobs you wish to simulate. The list is of the form: x1,y1,z1:x2,y2,z2:...
146
+where x is the start time of the job, y is the run time (i.e., how much CPU time
147
+it needs), and z the I/O frequency (i.e., after running z ms, the job issues an
148
+I/O; if z is 0, no I/Os are issued).
149
+
150
+For example, if you wanted to recreate the example in Figure 8.3 you would
151
+specify a job list as follows:
152
+
153
+```text
154
+prompt> ./mlfq.py --jlist 0,180,0:100,20,0 -Q 10,10,10
155
+```
156
+
157
+Running the simulator in this way creates a three-level MLFQ, with each level
158
+having a 10-ms time slice. Two jobs are created: Job 0 which starts at time 0,
159
+runs for 180 ms total, and never issues an I/O; Job 1 starts at 100 ms, needs
160
+only 20 ms of CPU time to complete, and also never issues I/Os.
161
+
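The --jlist string from this example can be unpacked with the same splitting rules the
simulator applies (jobs separated by ':', fields by ','); a brief sketch:

```python
# Parse the Figure 8.3 job list into (startTime, runTime, ioFreq) records.
jlist = '0,180,0:100,20,0'

jobs = []
for spec in jlist.split(':'):
    start_time, run_time, io_freq = (int(x) for x in spec.split(','))
    jobs.append({'startTime': start_time, 'runTime': run_time, 'ioFreq': io_freq})

for n, j in enumerate(jobs):
    print('Job %d: startTime %3d - runTime %3d - ioFreq %3d'
          % (n, j['startTime'], j['runTime'], j['ioFreq']))
```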
162
+Finally, there are three more parameters of interest. The -B flag, if set to a
163
+non-zero value, boosts all jobs to the highest-priority queue every N
164
+milliseconds, when invoked as such:
165
+
166
+```text
167
+  prompt> ./mlfq.py -B N
168
+```
169
+
170
+The scheduler uses this feature to avoid starvation as discussed in the chapter.
171
+However, it is off by default.
172
+
173
+The -S flag invokes older Rules 4a and 4b, which means that if a job issues an
174
+I/O before completing its time slice, it will return to that same priority queue
175
+when it resumes execution, with its full time-slice intact.  This enables gaming
176
+of the scheduler.
177
+
178
+Finally, you can easily change how long an I/O lasts by using the -i flag. By
179
+default in this simplistic model, each I/O takes a fixed 5 milliseconds; with
180
+this flag you can set that constant to something else.
181
+
182
+You can also play around with whether jobs that just complete an I/O are moved
183
+to the head of the queue they are in or to the back, with the -I flag. Check it
184
+out.

+ 338
- 0
hw5/simu2/mlfq.py

@@ -0,0 +1,338 @@
1
+#! /usr/bin/env python
2
+
3
+import sys
4
+from optparse import OptionParser
5
+import random
6
+
7
+# finds the highest nonempty queue
8
+# -1 if they are all empty
9
+def FindQueue():
10
+    q = hiQueue
11
+    while q > 0:
12
+        if len(queue[q]) > 0:
13
+            return q
14
+        q -= 1
15
+    if len(queue[0]) > 0:
16
+        return 0
17
+    return -1
18
+
19
+def LowerQueue(currJob, currQueue, issuedIO):
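+    # Drop the job one priority level (or keep it at level 0). If it just issued an
+    # I/O it is not re-queued here; the main loop re-queues it once the I/O completes.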
20
+    if currQueue > 0:
21
+        # in this case, have to change the priority of the job
22
+        job[currJob]['currPri'] = currQueue - 1
23
+        if issuedIO == False:
24
+            queue[currQueue-1].append(currJob)
25
+        job[currJob]['ticksLeft'] = quantum[currQueue-1]
26
+    else:
27
+        if issuedIO == False:
28
+            queue[currQueue].append(currJob)
29
+        job[currJob]['ticksLeft'] = quantum[currQueue]
30
+
31
+def Abort(str):
32
+    sys.stderr.write(str + '\n')
33
+    exit(1)
34
+
35
+
36
+#
37
+# PARSE ARGUMENTS
38
+#
39
+
40
+parser = OptionParser()
41
+parser.add_option('-s', '--seed', help='the random seed', 
42
+                  default=0, action='store', type='int', dest='seed')
43
+parser.add_option('-n', '--numQueues',
44
+                  help='number of queues in MLFQ (if not using -Q)', 
45
+                  default=3, action='store', type='int', dest='numQueues')
46
+parser.add_option('-q', '--quantum', help='length of time slice (if not using -Q)',
47
+                  default=10, action='store', type='int', dest='quantum')
48
+parser.add_option('-Q', '--quantumList',
49
+                  help='length of time slice per queue level, specified as ' + \
50
+                  'x,y,z,... where x is the quantum length for the highest ' + \
51
+                  'priority queue, y the next highest, and so forth', 
52
+                  default='', action='store', type='string', dest='quantumList')
53
+parser.add_option('-j', '--numJobs', default=3, help='number of jobs in the system',
54
+                  action='store', type='int', dest='numJobs')
55
+parser.add_option('-m', '--maxlen', default=100, help='max run-time of a job ' +
56
+                  '(if randomly generating)', action='store', type='int',
57
+                  dest='maxlen')
58
+parser.add_option('-M', '--maxio', default=10,
59
+                  help='max I/O frequency of a job (if randomly generating)',
60
+                  action='store', type='int', dest='maxio')
61
+parser.add_option('-B', '--boost', default=0,
62
+                  help='how often to boost the priority of all jobs back to ' +
63
+                  'high priority', action='store', type='int', dest='boost')
64
+parser.add_option('-i', '--iotime', default=5,
65
+                  help='how long an I/O should last (fixed constant)',
66
+                  action='store', type='int', dest='ioTime')
67
+parser.add_option('-S', '--stay', default=False,
68
+                  help='reset and stay at same priority level when issuing I/O',
69
+                  action='store_true', dest='stay')
70
+parser.add_option('-I', '--iobump', default=False,
71
+                  help='if specified, jobs that finished I/O move immediately ' + \
72
+                  'to front of current queue',
73
+                  action='store_true', dest='iobump')
74
+parser.add_option('-l', '--jlist', default='',
75
+                  help='a comma-separated list of jobs to run, in the form ' + \
76
+                  'x1,y1,z1:x2,y2,z2:... where x is start time, y is run ' + \
77
+                  'time, and z is how often the job issues an I/O request',
78
+                  action='store', type='string', dest='jlist')
79
+parser.add_option('-c', help='compute answers for me', action='store_true',
80
+                  default=False, dest='solve')
81
+
82
+(options, args) = parser.parse_args()
83
+
84
+random.seed(options.seed)
85
+
86
+
87
+# MLFQ: How Many Queues
88
+numQueues = options.numQueues
89
+
90
+quantum = {}
91
+if options.quantumList != '':
92
+    # instead, extract the number of queues and their time slices from the list
93
+    quantumLengths = options.quantumList.split(',')
94
+    numQueues = len(quantumLengths)
95
+    qc = numQueues - 1
96
+    for i in range(numQueues):
97
+        quantum[qc] = int(quantumLengths[i])
98
+        qc -= 1
99
+else:
100
+    for i in range(numQueues):
101
+        quantum[i] = int(options.quantum)
102
+
103
+hiQueue = numQueues - 1
104
+
105
+# MLFQ: I/O Model
106
+# the time for each IO: not great to have a single fixed time but...
107
+ioTime = int(options.ioTime)
108
+
109
+# This tracks when IOs and other interrupts are complete
110
+ioDone = {}
111
+
112
+# This stores all info about the jobs
113
+job = {}
114
+
115
+# seed the random generator
116
+random.seed(options.seed)
117
+
118
+# jlist 'startTime,runTime,ioFreq:startTime,runTime,ioFreq:...'
119
+jobCnt = 0
120
+if options.jlist != '':
121
+    allJobs = options.jlist.split(':')
122
+    for j in allJobs:
123
+        jobInfo = j.split(',')
124
+        if len(jobInfo) != 3:
125
+            sys.stderr.write('Badly formatted job string. Should be x1,y1,z1:x2,y2,z2:...\n')
126
+            sys.stderr.write('where x is the startTime, y is the runTime, and z is the I/O frequency.\n')
127
+            exit(1)
128
+        assert(len(jobInfo) == 3)
129
+        startTime = int(jobInfo[0])
130
+        runTime   = int(jobInfo[1])
131
+        ioFreq    = int(jobInfo[2])
132
+        job[jobCnt] = {'currPri':hiQueue, 'ticksLeft':quantum[hiQueue], 'startTime':startTime,
133
+                       'runTime':runTime, 'timeLeft':runTime, 'ioFreq':ioFreq, 'doingIO':False,
134
+                       'firstRun':-1}
135
+        if startTime not in ioDone:
136
+            ioDone[startTime] = []
137
+        ioDone[startTime].append((jobCnt, 'JOB BEGINS'))
138
+        jobCnt += 1
139
+else:
140
+    # do something random
141
+    for j in range(options.numJobs):
142
+        startTime = 0
143
+        # runTime   = int(random.random() * options.maxlen)
144
+        # ioFreq    = int(random.random() * options.maxio)
145
+        runTime   = int(random.random() * (options.maxlen - 1) + 1)
146
+        ioFreq    = int(random.random() * (options.maxio - 1) + 1)
147
+        
148
+        job[jobCnt] = {'currPri':hiQueue, 'ticksLeft':quantum[hiQueue], 'startTime':startTime,
149
+                       'runTime':runTime, 'timeLeft':runTime, 'ioFreq':ioFreq, 'doingIO':False,
150
+                       'firstRun':-1}
151
+        if startTime not in ioDone:
152
+            ioDone[startTime] = []
153
+        ioDone[startTime].append((jobCnt, 'JOB BEGINS'))
154
+        jobCnt += 1
155
+
156
+
157
+numJobs = len(job)
158
+
159
+print 'Here is the list of inputs:'
160
+print 'OPTIONS jobs',            numJobs
161
+print 'OPTIONS queues',          numQueues
162
+for i in range(len(quantum)-1,-1,-1):
163
+    print 'OPTIONS quantum length for queue %2d is %3d' % (i, quantum[i])
164
+print 'OPTIONS boost',           options.boost
165
+print 'OPTIONS ioTime',          options.ioTime
166
+print 'OPTIONS stayAfterIO',     options.stay
167
+print 'OPTIONS iobump',          options.iobump
168
+
169
+print '\n'
170
+print 'For each job, three defining characteristics are given:'
171
+print '  startTime : at what time does the job enter the system'
172
+print '  runTime   : the total CPU time needed by the job to finish'
173
+print '  ioFreq    : every ioFreq time units, the job issues an I/O'
174
+print '              (the I/O takes ioTime units to complete)\n'
175
+
176
+print 'Job List:'
177
+for i in range(numJobs):
178
+    print '  Job %2d: startTime %3d - runTime %3d - ioFreq %3d' % (i, job[i]['startTime'],
179
+                                                                   job[i]['runTime'], job[i]['ioFreq'])
180
+print ''
181
+
182
+if options.solve == False:
183
+    print 'Compute the execution trace for the given workloads.'
184
+    print 'If you would like, also compute the response and turnaround'
185
+    print 'times for each of the jobs.'
186
+    print ''
187
+    print 'Use the -c flag to get the exact results when you are finished.\n'
188
+    exit(0)
189
+
190
+# initialize the MLFQ queues
191
+queue = {}
192
+for q in range(numQueues):
193
+    queue[q] = []
194
+
195
+# TIME IS CENTRAL
196
+currTime = 0
197
+
198
+# use these to know when we're finished
199
+totalJobs    = len(job)
200
+finishedJobs = 0
201
+
202
+print '\nExecution Trace:\n'
203
+
204
+while finishedJobs < totalJobs:
205
+    # find highest priority job
206
+    # run it until either
207
+    # (a) the job uses up its time quantum
208
+    # (b) the job performs an I/O
209
+
210
+    # check for priority boost
211
+    if options.boost > 0 and currTime != 0:
212
+        if currTime % options.boost == 0:
213
+            print '[ time %d ] BOOST ( every %d )' % (currTime, options.boost)
214
+            # remove all jobs from queues (except high queue)
215
+            for q in range(numQueues-1):
216
+                for j in queue[q]:
217
+                    if job[j]['doingIO'] == False:
218
+                        queue[hiQueue].append(j)
219
+                queue[q] = []
220
+            # print 'BOOST: QUEUES look like:', queue
221
+
222
+            # change priority to high priority
223
+            # reset number of ticks left for all jobs (XXX just for lower jobs?)
224
+            # add to highest run queue (if not doing I/O)
225
+            for j in range(numJobs):
226
+                # print '-> Boost %d (timeLeft %d)' % (j, job[j]['timeLeft'])
227
+                if job[j]['timeLeft'] > 0:
228
+                    # print '-> FinalBoost %d (timeLeft %d)' % (j, job[j]['timeLeft'])
229
+                    job[j]['currPri']   = hiQueue
230
+                    job[j]['ticksLeft'] = quantum[hiQueue]
231
+            # print 'BOOST END: QUEUES look like:', queue
232
+
233
+    # check for any I/Os done
234
+    if currTime in ioDone:
235
+        for (j, type) in ioDone[currTime]:
236
+            q = job[j]['currPri']
237
+            job[j]['doingIO'] = False
238
+            print '[ time %d ] %s by JOB %d' % (currTime, type, j)
239
+            if options.iobump == False or type == 'JOB BEGINS':
240
+                queue[q].append(j)
241
+            else:
242
+                queue[q].insert(0, j)
243
+
244
+    # now find the highest priority job
245
+    currQueue = FindQueue()
246
+    if currQueue == -1:
247
+        print '[ time %d ] IDLE' % (currTime)
248
+        currTime += 1
249
+        continue
250
+    #print 'FOUND QUEUE: %d' % currQueue
251
+    #print 'ALL QUEUES:', queue
252
+            
253
+    # there was at least one runnable job, and hence ...
254
+    currJob = queue[currQueue][0]
255
+    if job[currJob]['currPri'] != currQueue:
256
+        Abort('currPri[%d] does not match currQueue[%d]' % (job[currJob]['currPri'], currQueue))
257
+
258
+    job[currJob]['timeLeft']  -= 1
259
+    job[currJob]['ticksLeft'] -= 1
260
+
261
+    if job[currJob]['firstRun'] == -1:
262
+        job[currJob]['firstRun'] = currTime
263
+
264
+    runTime   = job[currJob]['runTime']
265
+    ioFreq    = job[currJob]['ioFreq']
266
+    ticksLeft = job[currJob]['ticksLeft']
267
+    timeLeft  = job[currJob]['timeLeft']
268
+
269
+    print '[ time %d ] Run JOB %d at PRIORITY %d [ TICKSLEFT %d RUNTIME %d TIMELEFT %d ]' % \
270
+          (currTime, currJob, currQueue, ticksLeft, runTime, timeLeft)
271
+
272
+    if timeLeft < 0:
273
+        Abort('Error: should never have less than 0 time left to run')
274
+
275
+
276
+    # UPDATE TIME
277
+    currTime += 1
278
+
279
+    # CHECK FOR JOB ENDING
280
+    if timeLeft == 0:
281
+        print '[ time %d ] FINISHED JOB %d' % (currTime, currJob)
282
+        finishedJobs += 1
283
+        job[currJob]['endTime'] = currTime
284
+        # print 'BEFORE POP', queue
285
+        done = queue[currQueue].pop(0)
286
+        # print 'AFTER POP', queue
287
+        assert(done == currJob)
288
+        continue
289
+
290
+    # CHECK FOR IO
291
+    issuedIO = False
292
+    if ioFreq > 0 and (((runTime - timeLeft) % ioFreq) == 0):
293
+        # time for an IO!
294
+        print '[ time %d ] IO_START by JOB %d' % (currTime, currJob)
295
+        issuedIO = True
296
+        desched = queue[currQueue].pop(0)
297
+        assert(desched == currJob)
298
+        job[currJob]['doingIO'] = True
299
+        # this does the bad rule -- reset your tick counter if you stay at the same level
300
+        if options.stay == True:
301
+            job[currJob]['ticksLeft'] = quantum[currQueue]
302
+        # add to IO Queue: but which queue?
303
+        futureTime = currTime + ioTime
304
+        if futureTime not in ioDone:
305
+            ioDone[futureTime] = []
306
+        # print 'IO DONE'
307
+        ioDone[futureTime].append((currJob, 'IO_DONE'))
308
+        # print 'NEW IO EVENT at ', futureTime, ' is ', ioDone[futureTime]
309
+        
310
+    # CHECK FOR QUANTUM ENDING AT THIS LEVEL
311
+    if ticksLeft == 0:
312
+        # print '--> DESCHEDULE %d' % currJob
313
+        if issuedIO == False:
314
+            # print '--> BUT IO HAS NOT BEEN ISSUED (therefor pop from queue)'
315
+            desched = queue[currQueue].pop(0)
316
+        assert(desched == currJob)
317
+        # move down one queue! (unless lowest queue)
318
+        LowerQueue(currJob, currQueue, issuedIO)
319
+
320
+
321
+# print out statistics
322
+print ''
323
+print 'Final statistics:'
324
+responseSum   = 0
325
+turnaroundSum = 0
326
+for i in range(numJobs):
327
+    response   = job[i]['firstRun'] - job[i]['startTime']
328
+    turnaround = job[i]['endTime'] - job[i]['startTime']
329
+    print '  Job %2d: startTime %3d - response %3d - turnaround %3d' % (i, job[i]['startTime'],
330
+                                                                        response, turnaround)
331
+    responseSum   += response
332
+    turnaroundSum += turnaround
333
+
334
+print '\n  Avg %2d: startTime n/a - response %.2f - turnaround %.2f' % (i, 
335
+                                                                        float(responseSum)/numJobs,
336
+                                                                        float(turnaroundSum)/numJobs)
337
+
338
+print '\n'

+ 243
- 0
hw5/task1/README.md

@@ -0,0 +1,243 @@
1
+# Homework hw5 task1
2
+
3
+- [1.1. Goal](#11-goal)
4
+- [1.2. Using an external crate](#12-using-an-external-crate)
5
+    - [1.2.1. Versions of the external crates](#121-versions-of-the-external-crates)
6
+- [1.3. Tasks](#13-tasks)
7
+    - [1.3.1. Optional parameters](#131-optional-parameters)
8
+    - [1.3.2. Adding the external crate to the dependencies](#132-adding-the-external-crate-to-the-dependencies)
9
+    - [1.3.3. The function `pub fn run_zombie()`](#133-the-function-pub-fn-runzombie)
10
+    - [1.3.4. The function `pub fn run_childs(start_pid: i32, arg: &str) -> Result<(), String>`](#134-the-function-pub-fn-runchildsstartpid-i32-arg-str---result-string)
11
+- [1.4. Tests](#14-tests)
12
+- [1.5. Documentation](#15-documentation)
13
+- [1.6. Checking your repository](#16-checking-your-repository)
14
+
15
+## 1.1. Goal
16
+
17
+The goal of this task is to get to know some low-level system functions for
18
+process creation and synchronization by means of the external crate *nix*.
19
+
20
+The different behaviors of your program are triggered via optional
21
+command-line parameters.
22
+
23
+## 1.2. Using an external crate
24
+
25
+To use the system functions as directly as possible, the *[nix Crate][]*
26
+provides a uniform interface. Error handling in particular is considerably
27
+more convenient and safer with Rust and the nix crate. For this purpose, the
28
+nix crate exposes the information about the invoked system call via the
29
+Result type. Rust data types are used throughout the nix crate wherever they
30
+make sense. For example, slices are used as the standard way of passing
31
+around regions of a buffer. This convention is, in turn, expected to flow
32
+into C++ with the C++17 standard.
33
+
34
+In this task we are mainly interested in the *nix* modules
35
+`nix::unistd` and `nix::sys::wait`. In addition, further Rust
36
+standard-library methods are used, and the external crate *procinfo*
37
+is included.
38
+
39
+### 1.2.1. Versions of the external crates
40
+
41
+- *nix*: 0.9.0
42
+- *procinfo*: 0.4.2
43
+
44
+Only these two external crates may be included in your solution.
45
+
46
+## 1.3. Tasks
47
+
48
+### 1.3.1. Optional parameters
49
+
50
+Your program behaves differently depending on the options passed to
51
+it:
52
+
53
+- no parameter: call the function `pub fn run_zombie()`
54
+- one parameter: call the function `pub fn run_childs(start_pid: i32, arg:
55
+  &str) -> Result<(), String>`
56
+
57
+Details on the individual functions are given with the corresponding tasks
58
+below.
59
+
60
+### 1.3.2. Adding the external crate to the dependencies
61
+
62
+- Use the *[nix Crate][]* for the following tasks.
63
+- To do so, add the necessary entry under `[dependencies]` in
64
+  **Cargo.toml** and use the corresponding extern statement in your
65
+  root module `main.rs`.
66
+
67
+### 1.3.3. The function `pub fn run_zombie()`
68
+
69
+If the program is called without a parameter, we create a zombie as a 'reward'
70
+for this 'lazy' program invocation! This happens in the module
71
+*zombie/mod.rs* via its function `pub fn run_zombie()`. Inside
72
+this function, handle any errors that occur by aborting the
73
+program directly. However, this applies to this subtask only!
74
+
75
+What is a zombie process?
76
+
77
+>"When a process starts a new process (by forking), the old one is called the
78
+>'parent process' and the new one the 'child process'. When the child process
79
+>terminates, the parent process can ask the operating system in which
80
+>way the child process terminated: successfully, with an error, crashed,
81
+>aborted, etc.
82
+>
83
+>To make this query possible, a process remains in the process table even
84
+>after it has terminated, until the parent process performs this query,
85
+>regardless of whether the information is needed or not. Until then the
86
+>child process is in the zombie state. In this state the process itself no
87
+>longer occupies any main memory (apart from the negligibly small entry
88
+>in the kernel's process table) and consumes no CPU time either, but it
89
+>keeps its PID, which cannot (yet) be reused for other
90
+>processes." ([Quelle Wikipedia][])
91
+
92
+Create the function `run_zombie()` in the module *zombie/mod.rs*. If you call
93
+your program without parameters, a correctly 'generated' zombie gives you
94
+the following output:
95
+
96
+```text
97
+PID   TTY      STAT   TIME COMMAND
98
+27524 pts/1    Ss     0:00 -bash
99
+27531 pts/1    S      0:00 zsh
100
+27935 pts/1    S+     0:00 cargo run
101
+27936 pts/1    S+     0:00 target/debug/task1
102
+27962 pts/1    Z+     0:00 [task1] \<defunct\>
103
+27963 pts/1    R+     0:00 ps t
104
+```
105
+
106
+This output is generated with the program **ps t**, which your
107
+program conveniently invokes once you have brought about, with your own
108
+program, the state described in the text above. The
109
+zombie state is shown as Z, see also [man ps][].
110
+
111
+>Tip: To let your program 'sleep' for a moment, Rust provides the
112
+>function [thread::sleep][]. A process consists of at least
113
+>one thread (the main thread). And since you do not create any further threads,
114
+>this function puts your process (consisting of one main thread) to sleep.
115
+
116
+To start another program from within your program, the
117
+*nix* crate and the `std` library provide various functions:
118
+
119
+- [Module nix::unistd][] of the *nix* crate: the functions execv(), execve() and
120
+   execvp()
121
+
122
+- [Module std::process][] of the standard library: familiarize yourself
123
+   roughly with the use of [Module std::process][], in particular the
124
+   methods of the type [Command][]:
125
+
126
+  - new
127
+  - arg
128
+  - spawn
129
+  - output
130
+  - status
131
+
132
+Depending on which library you decide on, start by experimenting a
133
+little with the code examples from the documentation.
134
+
135
+>Tip: The Command interface of the standard library is easier to use.
136
+
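The assignment itself is to be solved in Rust, but the zombie mechanism is
language-independent. Purely as an illustration of the idea (a minimal Python sketch,
not part of the required solution): fork, let the child exit, and run `ps t` before the
parent ever waits for the child.

```python
import os
import subprocess
import time

pid = os.fork()
if pid == 0:
    # Child: exit immediately; until the parent waits for us, we remain a zombie.
    os._exit(0)

# Parent: give the child a moment to exit, then show the process table.
time.sleep(1)
subprocess.call(['ps', 't'])   # the child appears with state Z+ ("<defunct>")
os.waitpid(pid, 0)             # reaping the child removes the zombie again
```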
137
+### 1.3.4. The function `pub fn run_childs(start_pid: i32, arg: &str) -> Result<(), String>`
138
+
139
+If a parameter (`arg`) is supplied when your program is called,
140
+this parameter specifies the number of child processes to create.
141
+It is important that all child processes to be created depend on one another.
142
+If, in the last child process created, you call your already written `pstree()`
143
+function with the PID of the first parent process (`start_pid`) that was passed in,
144
+the printed list lets you check whether all child processes
145
+depend on one another. Provide your *pstree* module in *child/pstree.rs*,
146
+since it only needs to be used in the module *child/mod.rs*.
147
+
148
+> The Pid from the **nix crate** is of type `Pid`, which is an alias. Read
149
+> the documentation of the nix crate and use a suitable Rust
150
+> function (of a trait) to pass this type on to your function `run_childs()`
151
+> as an i32. You do not have to change anything in the **pstree function** for this!
152
+
153
+```text
154
+> ./target/debug/task1 4
155
+...
156
+task1(28207)---task1(28233)---task1(28234)---task1(28235)---task1(28236)
157
+...
158
+```
159
+
160
+Implement the function `pub fn run_childs(start_pid: i32, arg: &str) ->
161
+Result<(), String>` in the module *child/mod.rs*. All helper functions needed for this
162
+are provided either in *child/mod.rs* or in corresponding modules in the *child/*
163
+directory. Any errors that occur are returned to the root
164
+module (*main.rs*) and handled there. In this subtask,
165
+all errors that occur must be returned to the root module
166
+accordingly. If errors occur, the error message generated in the root module
167
+may be at most one line long, and the exit code of the program must be '1'.
168
+
169
+Parse the supplied parameter into a `u8` type with the `parse()`
170
+function! It is important that from then on you control the number of child processes
171
+via a `u8` variable!
172
+
173
+Every child should produce output, as should every parent once its child
174
+has terminated:
175
+
176
+- Child: hello, I am child (\<pid\>)
177
+- Parent: I am \<pid\> and my child is \<child\>.  After I waited for
178
+  \<waitstatuspid\>, it sent me status \<status\>
179
+
180
+  - \<pid\> via getpid()
181
+  - \<child\> = child pid
182
+  - \<waitstatuspid\> = see Ok of waitpid()
183
+  - \<status\> = see Ok of waitpid()
184
+
185
+```text
186
+> ./target/debug/task1 4
187
+hello, I am child (pid:28233)
188
+hello, I am child (pid:28234)
189
+hello, I am child (pid:28235)
190
+hello, I am child (pid:28236)
191
+
192
+task1(28207)---task1(28233)---task1(28234)---task1(28235)---task1(28236)
193
+I am 28235 and my child is 28236.  After I waited for 28236, it sent me status 0
194
+I am 28234 and my child is 28235.  After I waited for 28235, it sent me status 0
195
+I am 28233 and my child is 28234.  After I waited for 28234, it sent me status 0
196
+I am 28207 and my child is 28233.  After I waited for 28233, it sent me status 0
197
+
198
+```
199
+
200
+>The blank line after the output of all children matters!
201
+## 1.4. Tests
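Again purely as a language-independent illustration of the fork/wait chain (a Python
sketch, not the required Rust/nix solution; it leaves out the pstree output): every
process creates exactly one child, and every parent waits for its own child before
reporting the status.

```python
import os
import sys

NUM_CHILDREN = 4   # stands in for the parsed command-line argument

for _ in range(NUM_CHILDREN):
    pid = os.fork()
    if pid == 0:
        # Child: say hello, then create the next level of the chain.
        print('hello, I am child (pid:%d)' % os.getpid(), flush=True)
        continue
    # Parent: wait for the child just created, report, and stop.
    waited, status = os.waitpid(pid, 0)
    print('I am %d and my child is %d.  After I waited for %d, it sent me status %d'
          % (os.getpid(), pid, waited, status))
    sys.exit(0)

sys.exit(0)   # only the last child of the chain falls out of the loop
```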
202
+
203
+Für das *tests/* Verzeichnis steht wieder eine *output.bats* Datei zur
204
+Verfügung. Ausserdem erstellen Sie bitte eigene Unit Tests in einer eigenen zu
205
+erstellenden *unit_tests.rs* Datei in Ihrem Crate Root Verzeichnis (*src/*).
206
+
207
+## 1.5. Dokumentation
208
+
209
+Erstellen Sie für alle Module und Funktionen eine kurze aber aussagekräftige
210
+Dokumentation, und vergessen Sie nicht wichtige Passagen auch im Code zu
211
+kommentieren. Als Tutoren sollte es uns möglich sein, schnell Ihre genialen
212
+Lösungen nachvollziehen zu können.
213
+
214
+## 1.6. Kontrolle Ihres Repositories
215
+
216
+Haben Sie die Aufgaben komplett bearbeitet, so sollten sich folgende Dateien in
217
+Ihrem HW (Homework) Verzeichnis befinden:
218
+
219
+```text
220
+.
221
+├── Cargo.lock
222
+├── Cargo.toml
223
+├── README.md
224
+├── src
225
+│   ├── child
226
+│   │   ├── mod.rs
227
+│   │   └── pstree.rs
228
+│   ├── main.rs
229
+│   └── zombie
230
+│       └── mod.rs
231
+└── tests
232
+    └── output.bats
233
+
234
+4 directories, 8 files
235
+```
236
+
237
+[nix Crate]: https://docs.rs/nix/0.8.1/nix/
238
+[Module nix::unistd]: https://docs.rs/nix/0.8.1/nix/unistd/index.html
239
+[Module std::process]: https://doc.rust-lang.org/std/process/
240
+[Command]: https://doc.rust-lang.org/std/process/struct.Command.html
241
+[Quelle Wikipedia]: https://de.wikipedia.org/wiki/Zombie-Prozess
242
+[thread::sleep]: https://doc.rust-lang.org/std/thread/fn.sleep.html
243
+[man ps]: http://man7.org/linux/man-pages/man1/ps.1.html

+ 104
- 0
hw5/task1/tests/output.bats

@@ -0,0 +1,104 @@
1
+#!/usr/bin/env bats
2
+
3
+
4
+@test "task1: Check that we have a debug output" {
5
+    run stat "$BATS_TEST_DIRNAME/../target/debug/task1"
6
+    [ "$status" -eq 0 ]
7
+}
8
+
9
+# Check lines of output
10
+
11
+# wc output with white spaces is trimmed by xargs
12
+@test "task1: Output with Zombie must at least 4 Lines long" {
13
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' | wc -l | xargs"
14
+    [ "$output" -gt 4 ]
15
+
16
+}
17
+
18
+# wc output with white spaces is trimmed by xargs
19
+@test "task1: Output with to many paras must be exact 1 line long" {
20
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 2 3 4 | wc -l | xargs"
21
+    [ "$output" = "1" ]
22
+
23
+}
24
+
25
+
26
+# wc output with white spaces is trimmed by xargs
27
+@test "task1: Output with wrong para must be exact 1 line long" {
28
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' y | wc -l | xargs"
29
+    [ "$output" = "1" ]
30
+}
31
+
32
+# wc output with white spaces is trimmed by xargs
33
+@test "task1: Output with wrong para must be exact 1 line long" {
34
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' -1 | wc -l | xargs"
35
+    [ "$output" = "1" ]
36
+}
37
+
38
+# wc output with white spaces is trimmed by xargs
39
+@test "task1: Output with para 0 must be exact 0 line long" {
40
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 0 | wc -l | xargs"
41
+    [ "$output" = "0" ]
42
+}
43
+
44
+# wc output with white spaces is trimmed by xargs
45
+@test "task1: Output with para 256 must be exact 1 line long" {
46
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 256 | wc -l | xargs"
47
+    [ "$output" = "1" ]
48
+}
49
+
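+# A correct run with N children prints N 'hello' lines, one blank line, one pstree
+# line and N parent lines, i.e. 2*N + 2 lines (1 -> 4, 16 -> 34, 255 -> 512).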
50
+# wc output with white spaces is trimmed by xargs
51
+@test "task1: Output with para 1 must be exact 4 line long" {
52
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 1 | wc -l | xargs"
53
+    [ "$output" = "4" ]
54
+}
55
+
56
+# wc output with white spaces is trimmed by xargs
57
+@test "task1: Output with para 16 must be exact 34 line long" {
58
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 16 | wc -l | xargs"
59
+    [ "$output" = "34" ]
60
+}
61
+
62
+# wc output with white spaces is trimmed by xargs
63
+@test "task1: Output with para 255 must be exact 512 line long" {
64
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 255 | wc -l | xargs"
65
+    [ "$output" = "512" ]
66
+}
67
+
68
+# Status checks
69
+@test "task1: Output with wrong CHILD_NUMBERS does not crash" {
70
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 0 "
71
+    [ "$status" = 1 ]
72
+}
73
+
74
+@test "task1: Output with wrong CHILD_NUMBERS does not crash" {
75
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 256 "
76
+    [ "$status" = 1 ]
77
+}
78
+
79
+@test "task1: Output with wrong PARAM does not crash" {
80
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' a "
81
+    [ "$status" = 1 ]
82
+}
83
+
84
+@test "task1: Output with to many para does not crash" {
85
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 2 3 4 "
86
+    [ "$status" = 1 ]
87
+}
88
+
89
+@test "task1: Output with standard CHILD_NUMBERS exits with 0" {
90
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 4 "
91
+    [ "$status" = 0 ]
92
+}
93
+
94
+@test "task1: Output with MIN CHILD_NUMBERS exits with 0" {
95
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 1 "
96
+    [ "$status" = 0 ]
97
+}
98
+
99
+@test "task1: Output with MAX CHILD_NUMBERS exits with 0" {
100
+    run bash -c "'$BATS_TEST_DIRNAME/../target/debug/task1' 255 "
101
+    [ "$status" = 0 ]
102
+}
103
+
104
+
