In the EUTPF 2022 group, there were `participants.filter(x => x.program === "EUTPF2022").length` participants.

Because we're looking for a robust evaluation, we aim to evaluate Training for Good on multiple measures to make sure it is going well:
- Criteria-based importance-adjusted placements
- Relative utility importance-adjusted placements
- Criteria-based importance-adjusted impact moments
- Relative utility importance-adjusted impact moments
- Discounted Impact-adjusted Peak Years (DIPY)
- Qualitative flags of effectiveness
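As a rough illustration of the last quantitative measure, a DIPY score discounts each year's impact-adjusted peak-year equivalents back to the present. The discount rate and the yearly weights below are illustrative assumptions, not values specified anywhere in these notes:

```typescript
// Hedged sketch: a toy DIPY (Discounted Impact-adjusted Peak Years) calculation.
// The 5% discount rate and the example yearly weights are assumptions.
function dipy(
  impactAdjustedYears: number[], // impact-adjusted peak-year equivalents, year 0 first
  discountRate = 0.05            // assumed annual discount rate
): number {
  return impactAdjustedYears.reduce(
    (total, years, t) => total + years / Math.pow(1 + discountRate, t),
    0
  );
}

// e.g. a fellow contributing 1.0, 0.8, and 0.5 impact-adjusted
// peak-year equivalents over three years
const score = dipy([1.0, 0.8, 0.5]); // ≈ 2.2154
```

Any real calculation would need an agreed source for the per-year impact adjustments; this sketch only shows the discounting step.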
Questions here include:
- Did the fellows feel that they learned things from the fellowship?
- Did they make valuable connections?
```
Notes:
Will mostly come from post-survey data.
Methods of determining causality without randomization:
- Asking participants
- Look at the syllabus and try to determine how much of it is actually used in their jobs.
- Try to determine whether the intermediate outcomes in the theory of change occurred.
- Identify threats to causal inference (such as baseline differences between groups).
```
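The last check above, identifying baseline differences, can be sketched as a simple comparison of group averages on a pre-fellowship characteristic. The field name, comparison group, and tolerance threshold here are all hypothetical assumptions for illustration:

```typescript
// Hedged sketch: flagging baseline differences between fellows and a
// comparison group, one common threat to causal claims. The field
// `yearsExperience` and the tolerance of 1.0 are illustrative assumptions.
type Person = { yearsExperience: number };

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Returns true when the groups' baseline means differ by more than
// `tolerance`, suggesting outcome gaps may predate the fellowship.
function baselineDiffers(
  fellows: Person[],
  comparison: Person[],
  tolerance = 1.0
): boolean {
  const fellowMean = mean(fellows.map(p => p.yearsExperience));
  const comparisonMean = mean(comparison.map(p => p.yearsExperience));
  return Math.abs(fellowMean - comparisonMean) > tolerance;
}

const flagged = baselineDiffers(
  [{ yearsExperience: 6 }, { yearsExperience: 8 }],
  [{ yearsExperience: 2 }, { yearsExperience: 3 }]
); // groups differ substantially in prior experience
```

A flagged difference does not prove the fellowship had no effect; it only signals that the groups were not comparable to begin with, so raw outcome gaps cannot be read as causal.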