Category Archives: #ocTEL

Messaging (3.0) – a case study in “off the shelf” vs “in house” technology

A very quick post to say I have just been talking to staff about Messaging 3.0, when as an institution we are not even up to speed with supporting Facebook and Twitter (assuming they count as Messaging 2.0! ;-)

Messaging is a great case in point when thinking about what tech to use.  In theory, it's such a simple application – in practice there are so many issues, including but not limited to:

  • delivering the right basic functionality
  • getting user (student/staff) buy-in
  • getting people to use the same tech in the same way
  • tech usefulness vs usability
  • longevity
  • ownership
  • … and so much more

It's interesting to see that our VLE has built-in messaging – and has done for (at least 10?) years. But very few people use it, it is not integrated into their daily routines, and it doesn't work well on modern devices – lacking what now counts as basic features, such as simple sharing of multimedia content or mobile alerts.

Twitter seems like a nice alternative, but given they trashed the integration with our VLE at least once in the past (by dropping support for RSS access to Twitter feeds), it will be interesting to see what kind of tech gets chosen in future.  As a relatively small organisation, it's unlikely that our one institution would have much clout as a "strategic" business partner.

While supporting a diversity of tech can be good, it can also be a pain to manage and keep up with.  It has been interesting to see how #ocTEL has handled this over the last six weeks, but it would also be interesting to know whether people appreciate this or think it increases the overhead of engaging in discussions.

Moving forward with social media

This is a very quick post, I promise, reflecting on the experience of leading a project looking at the role and impact of social media and its value in learning and teaching. As well as describing the project, there is some reflection on project management – if nothing else, prompted by engagement with the #ocTEL MOOC.

The Project – Aims and Objectives

The aim of the project was to understand and bridge the divide between the virtual learning environment and social media. As such, the project could be seen as a more current, institution-specific contribution to the ongoing debate within Higher Education, going back to the initial discussion of whether social media has a positive or negative effect, and whether staff should stay away from it or refrain from using it for formal academic purposes.

The specific objectives of the project were to:

  1. listen to the student voice, to extend our understanding of students' attitudes to and use of social media
  2. engage staff beyond the core dual professionals, to understand their attitudes to and use of social media
  3. investigate issues in scaling VLE-social media integration across the whole of the institution
  4. evaluate the effectiveness of the VLE-SM integration on student engagement

The Project – Evaluation of Outcomes

The objectives of listening to the student voice (1) and supporting staff (2) were well met by the project and supported the overall aim of bridging the VLE/social network divide. The objectives of considering how to scale the integration (3) and evaluating the impact on student engagement (4) were less well achieved. This was in part due to changes in the project plan to address difficulties encountered, including the ability of the proposed project team to fully engage, the approval and recruitment of participants, and the re-scheduling of activities around key student milestones.

The Project Management – Approach

In general the project had a clear plan, with associated risk analysis and resource requirements. However, given the project was seen as an additional activity for most stakeholders, a soft approach was taken to project management, in terms of commitments of time, progress tracking and deadlines.

The result was a more collaborative/agile, rather than plan driven, approach involving a wide range of stakeholders including:

  • the central learning and teaching unit as sponsors
  • staff as potential users of social media
  • students as potential users of social media in their learning
  • the VLE development team as providers of insight on user behaviour
  • on campus/online programme tutors
  • the Students Union as champions of student experience

The Project Management – Evaluation

On reflection, the softer approach to project management resulted in less clear buy-in from participants in changing personal, work allocation and organisational environments. This was not helped by the lack of clear project events (either formal or informal) to mark defined milestones, such as project inception or wrap-up.

Despite a clear sense of purpose among participants, a clearer, better articulated communication plan, including such key events, would help develop a joint sense of endeavour with better clarity of roles, interdependencies and expectations. As ever, this would impact the timing of activities and enable us to better balance patience, given the lower priority of this exploratory project, with the speed that is vital for success. A big lesson learned for next time.

The “Project” – Next Steps

While the explicit project described here has closed, on-going work is planned – which may or may not be formalised into a new project.  In (not necessarily equal) parts, it is hoped that this will include:

  • revisiting the evaluation of effectiveness described above (objective 4)
  • seeing how student and staff attitudes can impact on any new VLE design (objective 3)
  • wider dissemination of project outcomes in seminars, conferences and papers
  • continuing student surveys to understand the changing social media/network/messaging landscape
  • cross-institutional staff survey on social media issues and the relationships between them (e.g. privacy vs engagement)
  • data analysis to validate conceptualisation/classification of student attitudes and the features that categorise them, e.g. as separatists or integrationists

 

Peer support for E-assessment

OK – 30 mins, one learning situation, and some (more) thoughts to capture.  The "what" this time refers to the project proposal for a personal, extended case study mentioned at the end of the previous post.  Students need to identify online services they might wish to "mash up" as part of a problem-based learning assessment during a Master's level web services course.

This has run for five-plus years and seen students proposing innovative applications such as location-based social media (before Foursquare or location-enabled Facebook), aggregation of travel planning services, and integration of healthy eating and shopping with intelligent fridges.  And while the assessment may be showing its age, it has remained relevant as more online, mobile services develop.

What – face to face

Students are not expected to completely develop a whole system, but are expected to use the application development project as a way of exploring web services technology and service-oriented architectures. And the first step on the journey is to consider how access to a wide range of services could be used to generate useful applications that meet the needs of all stakeholders, from users to project sponsors and service providers.

In a face-to-face teaching environment, ideas are discussed between students, proposed to tutors and critiqued.  This offers personalised, immediate feedback as part of a dialogue and at the start of the process.  An initial one-page proposal is submitted to make the case, and this is also reviewed by tutors with suggestions for improvement.  Crucially, the proposal is created by self-selecting groups of students working together and collaborating in any way they wish.

Apart from meeting the assessment criteria (e.g. in identifying and describing stakeholders), key issues for the earlier, (almost) formative assessment are whether the proposed application will:

  • offer a good range of functionality
  • make use of a wide range of data that needs to be married up
  • deliver identified value to users

So in an e-learning context, aggregating the available range of books from multiple online vendors (who essentially use the same data and processes) is less rewarding than a mashup which will source books to buy new or used, or allow borrowing from a range of libraries.
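
To make this concrete, here is a minimal sketch of the kind of mashup logic such a proposal might describe – plain Python, with stubbed, hypothetical data sources standing in for real vendor and library web services:

    # Toy "where can I get this book?" mashup. Each source function is a
    # stand-in for a call to a real web service.

    def new_copies(isbn):
        # Would call an online bookseller's API in a real mashup.
        return [{"source": "Bookshop A", "condition": "new", "price": 19.99}]

    def used_copies(isbn):
        # Stand-in for a second-hand marketplace service.
        return [{"source": "Marketplace B", "condition": "used", "price": 6.50}]

    def library_loans(isbn):
        # Stand-in for a library catalogue lookup.
        return [{"source": "City Library", "condition": "loan", "price": 0.0}]

    def book_options(isbn):
        """Aggregate buy-new, buy-used and borrow options, cheapest first."""
        options = new_copies(isbn) + used_copies(isbn) + library_loans(isbn)
        return sorted(options, key=lambda o: o["price"])

    for option in book_options("978-0-00-000000-0"):  # hypothetical ISBN
        print(option["source"], option["condition"], option["price"])

The value for the learner is less in the code than in the decomposition: identifying the stakeholders behind each source and the data that has to be married up across them.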

What – online

The same assessment task is currently done in the online version of the course, except that

  • student projects are done individually
  • the critique is done via an online forum, typically through individual learner-tutor interactions
  • the scope of the final developed software is expected to be smaller

The interesting issue is how to make the online experience more of a peer-assisted learning opportunity – to get fellow students reflecting on their own proposals, and helping to critique and develop other proposals.  In other words, redesigning how learners and tutors engage online.

How to change online engagement

If the question is how to move to more visible peer engagement by learners, the obvious default position of motivating students by awarding credit applies.  If you comment on two other proposals, you get 5 marks – and/or a badge!

However, a more nuanced approach is possible.  Adapting ideas from general student engagement (e.g. Mark Stubbs' 5 top tips), these ideas could include:

  • better signposting (of what the expected student role might be – top tip 1), or a more explicit invitation to learners to contribute their feedback (as per Gilly Salmon's e-tivities)
  • ensuring a balance of tutor responses to student responses, and making sure the former reward contribution and promote self-reflection (i.e. take the student voice seriously, tip 2)
  • clear criteria which promote clarity in assessment and therefore good feedback on suggestions (thereby building satisfaction and avoiding dissatisfaction)

But more than this, building student engagement with the learning process means:

  • trusting to peer assessment (and being bold about it – tip 4)
  • not thinking we have to watch and comment all the time, and not fretting if people use alternative technology or spaces (inspired by tip 5 on the tech wrap-around)

And this is where my 30 mins run out – thinking about the balance between formal assessment and control vs reflection/self-assessment, as shown by the questions on Sally Jordan's blog. And discussion of incentives links back to motivational issues linked to the surface vs deep debate.

But given time allows, here’s more on …

The role of social media

More general discussions on the use of social media raise the issue of engagement as a person/student, and engagement as a learner. So while Parcell (Listen, understand, act: social media for engagement) says

The role of social media has the potential to extend beyond learning and teaching to support student engagement in the broadest sense

… I think the challenge is the other way around – thinking about how general engagement with social media can be applied to a specific learning and teaching activity.

The Welsh JISC RSC report on social media gives recommendations under the title of “20 suggestions to enhance your student engagement with social media”.  As an aside, this title raises the questions:

  • is the engagement between you and your students, or is it your students' engagement with someone/something else?
  • is engagement with social media a means or an end?

Most of the recommendations here are too general to be of use in considering how to support students in dialogue related to subject learning. That said, the (potentially contradictory) comments about accessibility (recommendation 16) and going where the students are (recommendations 5, 14) could be taken to support my previous preference for VLE-based comms with links out to other media and potentially unbounded groups.  Points about socialisation and scaffolding also raise questions about how a bounded, supportive learning environment might be at odds with the openness promoted by social media use, despite actions on safeguarding.

Conclusions on e-peer support

So back to the re-engineering of assessment. My thoughts on how to change the particular online assessment activity in question would be to say:

  • yes to supporting peer assessment (or at least peer involvement in learning activities related to assessment)
  • yes to more research in to how best to do that, to avoid it becoming a box ticking exercise to gain marks
  • yes to a limited/controlled form of external social media support
  • but most importantly, yes to keeping the VLE as the default place where tutor-learner interaction happens

As for what happens in September, watch this space. It may be that the new cohort wholeheartedly wants to move the conversation somewhere else, and I as tutor will have to follow.


Where’s the E in assessment?

It's a standard kind of thing (lots of our) lecturers do. Weekly tests to keep students on their toes and keep them thinking.  In my case, it's a final-year module on web services, with eleven weeks divided into five main topics with fortnightly "objective" tests, delivered to 20-30 students.

In this post, I want to consider this particular type of assessment and see how the use of technology can impact upon it.

What

A small number of multiple-choice questions are used, sometimes in conjunction with code samples, to test basic understanding.  Over the last couple of years the questions have been delivered in a variety of formats including:

  1. physical, paper based tests in class with emailed feedback
  2. in-class tests with paired students, with (non-E) voting systems, and immediate feedback
  3. downloadable question sheets, uploadable answers, emailed feedback
  4. online MCQ testing with immediate online feedback

In the first two formats, there is very little E.  The third relies on the VLE for communicating information, while the last is the most typical form of e-assessment, relying on the use of an independent MCQ platform.
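
For illustration only, here is a sketch of what format 4 boils down to behind the scenes: a question bank entry with per-option feedback and an immediate marking function. The field names and the sample question are made up rather than taken from any particular MCQ platform.

    QUESTION_BANK = [
        {
            "question": "Which HTTP method is conventionally used to retrieve a resource?",
            "options": ["POST", "GET", "DELETE", "PUT"],
            "answer": 1,  # index of the correct option
            "feedback": [
                "POST is conventionally used to create or submit data.",
                "Correct: GET retrieves a representation of a resource.",
                "DELETE removes a resource.",
                "PUT replaces or updates a resource.",
            ],
        },
    ]

    def mark(question, chosen):
        """Return (score, immediate feedback) for a single MCQ response."""
        correct = chosen == question["answer"]
        return (1 if correct else 0), question["feedback"][chosen]

    score, feedback = mark(QUESTION_BANK[0], chosen=1)
    print(score, feedback)

The point is that the "e" here is modest: the same questions and feedback could be (and, in formats 1-3, are) delivered with little or no technology.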

Why Test

If asked why I do regular testing (or when asked – by ocTEL), my justification or explicit objectives would be based on a subset of something like Chickering and Gamson's principles, such as:

  1. time on task – giving students something to aim for and to ensure engagement with the basic material
  2. high expectations – showing students the kinds of questions we would expect them to be able to answer
  3. prompt feedback – letting students know how well they are doing and whether they need to be changing what/how they are studying

And routine testing can help meet these objectives.  However, there is a risk that this approach does not deepen knowledge and understanding.  Instead it might just direct students into learning for the test – a very superficial approach.

An almost equally important question to "why test?" would be "why use technology to support testing?" While some may say that tech-supported testing offers a richer testing environment (as shown by the use of video to present alternative routes through a real-life scenario), in practice many of my issues around e-testing are more to do with practicalities than pedagogy. It is all too easy to embed simple MCQ questions into online material to give an impression of interaction, without doing anything with the information.

Why Not Test

So how to avoid the pitfalls of superficial online testing?  It's interesting to use the 12 REAP principles to reflect more on my practice, to make sense of what I have tried and think where else I could go. Although principles 1 (good performance), 2 (time and effort) and 3 (quality feedback) match objectives 1-3 above and could be viewed as already covered, there is clearly much more thinking that could be done.

First up, it is possible to argue that the MCQ testing in itself is not a challenging or interesting learning task, something that REAP promotes (principle 2).  Fortunately, in the module under discussion the MCQ testing is not done in isolation.  Alongside the formative testing, there is a parallel stream of (summatively) assessed practical tasks which provides more challenge.  Making clearer links between these tasks and/or synchronizing the timing could reinforce the value of the formative tests and encourage a deeper approach to learning.  More detailed feedback could also provide an opportunity for the testing to impact learning (principles 4 and 5) as measured or guided by the other summative tasks, provided the learner engages in reflection (principle 7).

The avoidance of a superficial approach can also be addressed by supporting social interaction around the formative testing that promotes peer-supported, self-directed collaborative learning.  This is implicit in the classroom approach (format 2), which uses low-tech, "Strictly Come Dancing" style, colour-coded response cards shared by pairs of students.  The pairing approach works well in promoting discussion to select the correct answer, and the relatively low number of scorecards provides an easy way of assessing overall performance and providing feedback.  The fact that the feedback is provided in a face-to-face environment provides more opportunity for, and encouragement of, dialogue (principle 6).

Challenges and Opportunities of e-Testing

The different ways of engaging in testing (online/offline, open/closed book, synchronous/asynchronous) emphasise different REAP principles which might find favour with different teachers.  Interestingly, when students are asked which method they favour there appears to be less variation, as they consistently prefer format 3 – the open-book, asynchronous, VLE-facilitated tests.

While involving learners in decision-making about assessment practice is one of the REAP principles (9), the preferred student option feels less authentic than the more interactive face-to-face option (format 2), and less demanding than the full-blown online MCQ with personalised feedback (format 4).  However, constraints on time (for format 2) or institutional support (for format 4) mean that format 3 is pragmatically more manageable for the number of students involved.

Despite the challenges of adopting a more varied testing format, REAP-inspired reflection does suggest a number of refinements to the testing process, in particular for entirely online students.  One way to increase student reflection, dialogue and the development of learning groups (principle 10) might be to start with individual tests, using the results to select mixed-ability learning groups.  The groups could then be tasked with debating and submitting just a single set of agreed answers for each group.  If gamification is seen as a motivating factor, group results could be published via a leaderboard.
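
As a rough illustration of that grouping step – a sketch only, in plain Python with made-up scores, rather than a feature of any VLE or quiz platform – a simple "snake draft" over the individual results would seed mixed-ability groups, and the agreed group answers could then feed a leaderboard:

    def mixed_ability_groups(scores, group_size=4):
        """Allocate students to groups so each group mixes high and low scorers.

        scores: dict mapping student name -> individual test score.
        Uses a snake draft: rank students by score, then deal them out
        forwards and backwards across the groups.
        """
        ranked = sorted(scores, key=scores.get, reverse=True)
        n_groups = max(1, len(ranked) // group_size)
        groups = [[] for _ in range(n_groups)]
        i, step = 0, 1
        for student in ranked:
            groups[i].append(student)
            i += step
            if i == n_groups or i < 0:
                step = -step
                i += step
        return groups

    def leaderboard(group_results):
        """Print groups ranked by their agreed-answer score."""
        ranked = sorted(group_results.items(), key=lambda kv: kv[1], reverse=True)
        for rank, (group, score) in enumerate(ranked, start=1):
            print(f"{rank}. {group}: {score}")

    # Entirely hypothetical cohort and scores
    scores = {"Asha": 9, "Ben": 4, "Cat": 7, "Dev": 5,
              "Eli": 8, "Fay": 3, "Gus": 6, "Hana": 2}
    print(mixed_ability_groups(scores))     # two balanced groups of four
    leaderboard({"Group 1": 18, "Group 2": 16})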

The role of technology in testing

In thinking about testing, my first question was where is the "e-" in this type of assessment.  Or, more importantly, what makes it an e-assessment?  And, by the way, does being an e-assessment mean it is not possible to undertake it without any technology support?  But on reflection, the lines are blurred and the why is clearly more important than the how.  Technology shouldn't decide whether we want to do paired or group testing – but it sure helps scale things up from 30 students to 130.

And thinking about technology as an enabler makes it possible to think about re-engineering other assessment opportunities. Rather than just relying on students commenting on other people’s project suggestions in a forum, why not build a more structured online peer review element into the proposal stage … now there’s a (not very novel) idEa!
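
As with the MCQ example above, the machinery for that structured peer review is the easy part. A minimal sketch – hypothetical student names, plain Python, independent of any VLE – of an allocation that gives every student two other proposals to review:

    def assign_reviews(students, reviews_per_student=2):
        """Round-robin allocation: each student reviews the next few proposals
        in a fixed ordering, so everyone gives and receives the same number."""
        n = len(students)
        if n <= reviews_per_student:
            raise ValueError("Need more students than reviews per student")
        return {
            student: [students[(i + offset) % n]
                      for offset in range(1, reviews_per_student + 1)]
            for i, student in enumerate(students)
        }

    # Hypothetical cohort: Asha reviews Ben and Cat, Ben reviews Cat and Dev, ...
    print(assign_reviews(["Asha", "Ben", "Cat", "Dev", "Eli"]))

The harder part, as argued above, is designing the prompts and criteria so the reviews are worth giving and receiving.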

Technology for Pedagogy: from purpose to activity

I was very interested to see technology selection as a topic for discussion in ocTEL a couple of weeks back.  It's taken me a while to catch up and in some ways I wish I hadn't.

The aim is to think about how pedagogy drives technology selection, and the context is provided by a model expounded by Hill et al (2012) in discussing the derivation of electronic course templates for use in higher education.  In the way of all good learning experiences, applying the details of the model to an instance of our own personal experience should help us assimilate it better.

A case in point: sharing resources

The instance of technology selection that immediately sprang to mind was the choice of a platform to allow online students to share resources.  The issues that came to mind at the time, in considering the desired technological properties, focussed on:

  • would everyone have access to the technology chosen (i.e. accessibility)
  • would everyone be able to find out/know how to use it (navigability)
  • would everyone find it easy to incorporate its use into their studies (interaction)

The pedagogical purpose of the technology was taken as a given.  A creative activity normally done in a face-to-face context was being moved online.  The main learning outcome was to explore how different online services could be mashed up (in terms of data and/or functionality) to produce new, useful systems.  In the f2f activity, learners would go on to undertake a group project, sharing and refining ideas about what they would do.  In the online version, learners would undertake an individual project where the refinement process would benefit from support from tutors and fellow students.  In flavour, this could be seen as taking a constructive approach, with students having ownership of the task and being coached and supported in developing their understanding of how to characterise and decompose services.

Modelling the pedagogy

Hill’s framework proposes a number of different pedagogical dimensions which can be used to analyse a course to facilitate the choice and method of use of technologies, albeit in the context of a course VLE/MLE template. These pedagogical dimensions include:

  • logistical: capturing differences in how many learners were taking the module, and when
  • practice: "emerging" from teaching and learning activities, and participants' prior expertise
  • purpose: with a plan fostered by a teaching approach, strategy and/or theory
  • participation: dealing with the contact environment and extent of online activity or “web work”

In principle these dimensions could be applied to analyse the case study above to identify technology requirements.  Logistically we could be dealing with 20-30 Master's students in either an f2f or online course instance.  The activity itself (i.e. practice), however it is undertaken, is identical in outcome, with the same purpose (as part of a cognitive approach with situative overtones).  The main area of difference is in participation, in which online students are supported by technology in sharing resources and discussing ideas, whereas f2f students are left to manage this themselves within their own groups.

In practice, I find Hill's dimensions are poorly delineated and hard to relate to desired properties of technology. In particular, in designing a teaching and learning task, one needs to consider why it is being done (purpose), what will be done (practice), how people will engage in it (participation), and how well it will scale (logistics).  These decisions are not independent, as the notion of separate "dimensions" implies.  Given this, it is unsurprising that the authors of this model struggled to identify "robust associations between pedagogical dimensions and course site properties".

Scaling the pedagogy/technology choice

The core role of pedagogy in deciding technology can be seen in examples taken from Hill et al.  They provide technology templates to support:

  • a Cornerstones approach in which a content repository supports a didactic, on campus delivery model
  • a Web 2.0 approach in which functionality is provided to support collaboration outside of class in a situative, blended model

However, these examples also highlight the poor structure of Hill's approach to thinking about or decomposing the pedagogy.  This is most strongly shown in the stated purpose of the Web 2.0 template, which, according to the description of the model's dimensions, should state the "teaching philosophy or methodology" and "include(s) pedagogical intentions".  While this is partly addressed by stating a desire to "foster collaboration", this philosophy is undermined by a list of functionality which includes "wikis, blogs, polls and social networking".  While the case could be made for these tools as part of the pedagogic intent, the same cannot be said for also including – as part of the pedagogic purpose – "filespace, announcements, calendar, discussions, project assignments, and (asynchronous) messages".

The Web 2.0 template described above is a great example of poor course design in which technology is substituted for pedagogy.  In the same way that adding user profiles to an e-commerce site does not turn it into a social network, adding a range of Web 2.0 technology to a VLE does not turn a course into a collaborative, situative/constructivist learning experience.

Pedagogy not Technology

So if Hill’s pedagogy/technology lattice does not help us in technology selection, what should we be doing?  My short answer would be that we should be separating learning design from technology selection.

I am all in favour of seeing technology as an enabler.  Indeed, I am keen on adapting Bismarck and seeing it as defining the "art of the possible".  However, I believe we need to think about pedagogy before technology.

This view is very well articulated by Mayes and de Freitas, who observe that the idea that

"the presentation of subject matter using multimedia … would lead to better learning … was responsible for much of the disillusionment that resulted from computer-based learning in the 1980s and 90s".

I believe simply adding YouTube videos or leaderboards to a VLE will have a similar effect in the 2010s.

Selecting Technology – Revisited

So if your course design process has identified a need for a collaborative blogging platform or a video-sharing site, and Hill's model is not useful in actually helping you select one, what is going to help?

There is a whole range of methodologies, strategies and issues to think about. One of my favourites is the technology acceptance model, which focusses on usefulness and usability. But a more widely adopted idea, building on the idea of usability, is that of User eXperience (UX) design or modelling, which is in fact one part of Hill's model. It's worth pointing out that this is not a question of what experience a user brings to a particular interaction, website or system. Instead, it focusses on what kind of experience a visitor has and how they perceive the system in question.

User experience encompasses a whole range of ideas including consideration of:
– different user profiles, demographics, goals, technology etc
– visual design, including layout, metaphors, affordances etc
– information architecture that determines the structure and navigation of content
– interaction design to consider how users do things with the system
– affective design, about how users' emotions and mental state may vary
– personalisation and adaptability
… and a bunch more beside.

While consideration of the user experience won’t in and of itself tell you which system or platform to choose, it may help you with the questions you need to ask of any one solution you might be considering.

Reusable Computational Thinking

My aim, documented in this post, has been to find an open educational resource to support the activity defined in "Mine's a mocha".  Inspired by the ocTEL week 3 webinar, I searched in PhET but this was not helpful.

A simple look in OpenLearn threw up an “Introduction to computational thinking” which is exactly the right sounding topic.

A look in iTunesU proved much more complex, from having to log in (I think!) to interpreting search results.  There appeared to be something relevant from MIT’s Open Courseware programme – more on this below.

Finally, a quick (Google) search of YouTube throws up a bunch of videos on computational thinking.  The key issue is which to try, based on simple information about duration, author/uploader, and description.  Some results also include a count of channel subscribers and comments as an indication of popularity, but I did not find this generally helpful.

Open Learn Resource Review

Interestingly, the OpenLearn resource is "an adapted extract from the Open University course M269 Algorithms, data structures and computability" and in turn "Much of the material in this unit is organised around video clips from a presentation that (Jeannette M. Wing, Professor of Computer Science at Carnegie Mellon University and Head of Microsoft Research International) gave in 2009 entitled 'Computational Thinking and Thinking About Computing'".

There is an indication of the aim (and implied audience) from the original OER given “the presentation builds on Wing’s influential 2006 ‘Computational Thinking’ paper in which she set out to ‘spread the joy, awe, and power of computer science, aiming to make computational thinking commonplace’ (Wing, 2006, p. 35)”

This is clearly an example of one OER building on another OER in the spirit of the Creative Commons "CC BY" licence, which "lets others distribute, remix, tweak, and build upon your work".

Ability to Reuse

Checked out the OpenLearn FAQ, with its questions on reuse and charging.

All this seems to support reuse within our HE programmes provided we are not charging for straight access to the resource, but instead charging for a learning experience – as per this week's webinar discussion.

What to Reuse

Learning outcomes for the OpenLearn resource seem to match what we are trying to develop, though the level may be higher!

The resource has a nice definition of an algorithm – would we want to highlight this or reuse it in our own resource?  Given the structure of the web content, it would be hard to do the former – unless we could use an annotation tool such as Diigo – or ScoopIt?

The discussion of the relationship between the real world and mathematical models is too detailed/off topic, but it has a nice drawing to match things up – effectively a graphical MCQ/objective test question

… but leads on to discussion of abstraction – for which there is an entry in the glossary, but which is perhaps too technical for KS3!

Would need to watch the video to decide if it was a good resource to reuse.

How to Reuse

On first inspection, parts of this resource would be best reused as background material.  The main criteria for selection would be appropriateness to learners and learning outcomes.

iTunes / MIT Open Courseware Review

The licensing is clear and up front – CC BY-NC-SA, but version 3 compared to the OU's version 4.

The intended audience and aims are clear, given the course is "… aimed at students with little or no programming experience. It aims to provide students with an understanding of the role computation can play in solving problems", which could be taken to be the same as the OpenLearn resource.

However, OCW “also aims to help students, regardless of their major, to feel justifiably confident of their ability to write small programs that allow them to accomplish useful goals. The class will use the Python™ programming language.”   This suggests a much more technical course.

And wow – the course is a blast from the past, despite being recorded in 2011.  Chalk and talk, with very few notes being written on the board.  And the starting point is the programming language and tools.  It is only 30+ minutes in that the first concept of sequences or "straight line" programmes is introduced.  A reference back to recipes "last time" suggests that this is the wrong lecture to start with!

Went back to the first lecture, where the first notes are "declarative & imperative"!  We get to "algorithm" 15 mins in, defined in terms of "how to perform a computation", with the additional concepts of "convergence" or halting.  The reference to recipes in lecture 2 is clearly to the metaphorical rather than edible type!

I know people who have sat through exactly these types of "introductory" lecture – the kind that put people with no programming experience off programming altogether. The answers to the questions of what and how to reuse are "nothing" and "don't".

Just as important as the content itself: while iTunesU gives you links to access the content, e.g. https://itunes.apple.com/gb/podcast/lecture-2-core-elements-program/id499270153?i=110101057&mt=2, iTunes is still required to play it.  This may act as a barrier to people who do not have iTunes (and an Apple ID?).  Mashing up the content then becomes more difficult.

YouTube Review

The first video on computational thinking is way too general – and takes three-plus minutes out of three and a half to get to a definition of computational thinking as "critical thinking + the power of computing".  Not memorable (I had to go back and check!) or particularly useful.

A second search on YouTube (adding terms for the basic concepts of sequence, selection and iteration) pulled up more promising videos, but information on them is patchy, granularity is small and, for the context, you need to start watching them.

One of the results my search for the basic concepts actually throws up is a short video from Durham College (who/where?) that places the concepts I am looking for in the context of:

  • a C# programming course, where …
  • students have read 4 chapters of a book, and
  • attended one lab session
  • and are now ready for an overview of control structures

I am not sure this is my intended audience.

What to Reuse

The video introduces the notions we are after with clear, if mathematical, examples.  Looking at how to reference a particular point in a YouTube video, we can see that a start time can be added to the link itself (e.g. appending ?t=90 to jump 90 seconds in), so a link can point straight at the relevant section.

How to Reuse

There is no clear indication of licensing for this video as the share tab is just a way to generate the URL or web address to use.

However, the material is of reasonable quality and might make interesting background for more mathematically minded students.

The ability to reference a particular part of the video directly in a browser is a distinct advantage over iTunesU, where content access and/or download requires iTunes.

Conclusions

The use of more instructional design or structured descriptions for resources makes it easier to assess at the outset whether they are going to be appropriate. This favours a more structured source over an open search.

Conversely, mashing up, remixing or building upon content is easier with openly accessible content that can be referenced by web address or URL, as opposed to resources which are accessed in closed systems or are not designed for remashing.

As for the difference in type of resources, e.g. text vs video vs interactive elements vs quiz etc, this is something to be discussed elsewhere.