Statement from the Vice-Chancellor on industrial action

Cambridge University NewsFeed - Thu, 03/01/2018 - 17:45

"I recognise UUK’s limited room for manoeuvre due to extremely low real interest rates and the views of the Pensions Regulator, who will ultimately decide on the scheme’s viability. Our influence is, therefore, limited. But I strongly support the exploration of ideas to resolve this situation, including those put forward by UCU. There has to be compromise. I hope that further disruption to students’ studies can be avoided while talks continue.

"The current situation cannot go on. It has, understandably, led to anger from staff and anxiety from students. I therefore urge the parties to agree a pragmatic solution to bring to an end the current dispute. Once this has been achieved we can focus on a long-term, sustainable solution which is in the best interests of the sector, the University and individual members of the USS.

"Pensions form a key component of our compensation package for staff and they play a significant role in the attractiveness of the UK’s higher education sector to talented individuals from around the world.

"Cambridge University has been actively working on options for some time and we have been discussing these with UUK. We believe a sector-wide scheme has significant benefits. One option to maintain a sector-wide approach, at least in the short-term, would be an alternative that retains a Defined Benefit (DB) element, but combines it with a Defined Contribution (DC) component along the lines of our existing Cambridge University Assistants’ Contributory Pension Scheme (CPS).

"There are other approaches that could be explored as longer-term solutions. These could include a Collective DC scheme, similar to that being considered by the Royal Mail, or a government-backed solution. These might offer better benefits than the current scheme, yet still be affordable for universities. However, these require new legislation or government action.

"If all else fails and no sector-wide scheme is deliverable, Cambridge will have to consider whether there is scope for a Cambridge-specific scheme – either within or outside the USS. We must recognise, however, that there are serious obstacles to such an approach.

"Cambridge University is also prepared to consider assuming the costs of additional contributions in the short-term should no other option be viable. It should be noted, however, that this approach would likely require trade-offs and cuts in other parts of the University. 

"You have my absolute commitment to working with all parties to find a way through this dispute; a way which recognises the concerns of our staff, ensures the sustainability of the University, and maintains an excellent education for our students. Such an outcome is imperative if we are to safeguard the global leadership of the UK’s higher education sector, and of Cambridge University in particular."

Professor Stephen J. Toope

"I welcome the commitment to further talks between UCU and UUK to end the current strike."

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Categories: Cambridge, Cambridgeshire

Council supports Cambridge Half Marathon as part of its work supporting people to get more active

Cambridge Council Feed - Thu, 03/01/2018 - 11:55

The Saucony Cambridge Half Marathon returns to the city’s streets and open spaces this weekend – and it is set to be the biggest yet.

Cambridge City Council is working closely with event organisers to welcome more than 9,000 runners to the city on Sunday 4 March for the 13.1 mile race, which is now in its seventh year.

Categories: Cambridge, Cambridgeshire

Cambridge kids show you can make a rainbow - even when it's snowing

Cambridge University NewsFeed - Wed, 02/28/2018 - 16:15

The school children were joined by staff and Eddington residents who each donned clothing to match one colour in the rainbow in a show of support for diversity.

With temperatures plummeting to -3°C and snow flurries disrupting plans for an outdoor celebration, the local Eddington and Cambridge community packed into the assembly hall in a show of solidarity.

Vice-Chancellor of the University of Cambridge, Professor Stephen Toope also attended, saying: “As LGBT+ History Month reaches its end, we have much to celebrate. Exhibitions, talks and performances have charted the rich and vibrant history of the LGBT+ community – but also its struggles.

“In my field of law, there have been advances in gaining equality for LGBT people – from protection from discrimination, to celebration of civil unions and parenthood. But equality in law doesn’t always translate into equality in life. That’s why we will keep up our efforts to celebrate Cambridge’s diversity.

“Specific initiatives, including the School of Humanities and Social Sciences’ recently announced programme focusing on LGBTQ+ research, illustrate the importance of acknowledging this diversity in our academic pursuits as well as in our daily lives. Our primary school continues to embrace opportunities to define what a truly inclusive education could be.

“LGBT+ History Month has shown what we can achieve when we all work together with the common goal of creating the Cambridge we want to live, work and study in. We are committed to being a place where people are allowed to be themselves – to think their own way, define their own boundaries and form their own identities.

“Thanks to all of you who have participated, given your support and helped Cambridge to be a welcoming, open and tolerant place.”

LGBT+ History Month takes place every February to promote the visibility of lesbian, gay, bisexual and transgender people, their history, lives and experience. This encourages diversity and equality, as well as raising awareness and advancing education on matters affecting the LGBT+ community.

Eddington’s rainbow photo call marked the end of the month of activities across the school, University and society to raise awareness of and celebrate the LGBT+ community. 

Heather Topel, Project Director for the North West Cambridge Development, said: “The Eddington Rainbow was a success and we are pleased to support the development of Eddington as a new community in Cambridge that is open to all. We will be hosting a range of events throughout the year that support the broad range of individuals and communities that are part of Cambridge.”

Three hundred pupils at the University of Cambridge Primary School formed a giant rainbow to mark the end of LGBT+ history month today.

"Our primary school continues to embrace opportunities to define what a truly inclusive education could be" (Vice-Chancellor Professor Stephen Toope). Image: University of Cambridge Primary School rainbow flag.

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Categories: Cambridge, Cambridgeshire

Wintry conditions trigger severe weather provision to help rough sleepers

Cambridge Council Feed - Wed, 02/28/2018 - 11:27

The current spell of cold weather has seen Cambridge City Council triggering the Severe Weather Emergency Provision (SWEP), which provides emergency accommodation for rough sleepers.

Under the scheme, anyone who would otherwise have to sleep in the open will be offered emergency accommodation free of charge. The only qualification for being offered a bed is an acceptable level of behaviour.

This is the fifth time this season that emergency accommodation has been laid on. In total so far this winter, additional shelter for rough sleepers has been available on 39 nights.

Categories: Cambridge, Cambridgeshire

Living with artificial intelligence: how do we get it right?

Cambridge University NewsFeed - Wed, 02/28/2018 - 11:00

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That’s yesterday’s news; what’s next?

True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that are capable of human-level performance on the full range of tasks that we ourselves can tackle.

If so, then there’s little reason to think that it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and need to fit through a human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the Webb Space Telescope.

Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines?

On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more we ask it to do for us, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, who didn’t really want his breakfast to turn to gold as he put it to his lips.

So we need to make sure that powerful AI machines are ‘human-friendly’ – that they have goals reliably aligned with our own values. One thing that makes this task difficult is that by the standards we want the machines to aim for, we ourselves do rather poorly. Humans are far from reliably human-friendly. We do many terrible things to each other and to many other sentient creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.

For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll have the smarts for the job. If there are routes to the uplands, they’ll be better than us at finding them, and steering us in the right direction. They might be our guides to a much better world.

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity and precision that we can be confident that they will find it – whatever ‘it’ actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities might have different views.

The ‘destination’ problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.

Just to focus on one aspect of these difficulties, we are deeply tribal creatures. We find it very easy to ignore the suffering of strangers, and even to contribute to it, at least indirectly. For our own sakes, we should hope that AI will do better. It is not just that we might find ourselves at the mercy of some other tribe’s AI, but that we could not trust our own, if we had taught it that not all suffering matters. This means that as tribal and morally fallible creatures, we need to point the machines in the direction of something better. How do we do that? That’s the getting started problem.

As for the destination problem, suppose that we succeed. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own tribes, for example.

Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to keep slaves, or to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical overlords – sanctimonious silicon curtailing our options? They might be so good at doing it that we don’t notice the fences; but is this the future we want, a life in a well-curated moral zoo?

These issues might seem far-fetched, but they are already on our doorsteps. Imagine we want an AI to handle resource allocation decisions in our health system, for example. It might do so much more fairly and efficiently than humans can manage, with benefits for patients and taxpayers. But we’d need to specify its goals correctly (e.g. to avoid discriminatory practices), and we’d be depriving some humans (e.g. senior doctors) of some of the discretion they presently enjoy. So we already face the getting started and destination problems. And they are only going to get harder.

This isn’t the first time that a powerful new technology has had moral implications. Speaking about the dangers of thermonuclear weapons in 1954, Bertrand Russell argued that to avoid wiping ourselves out “we have to learn to think in a new way”. He urged his listener to set aside tribal allegiances and “consider yourself only as a member of a biological species... whose disappearance none of us can desire.”

We have survived the nuclear risk so far, but now we have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if so it will require the same cooperative spirit, the same willingness to set aside tribalism, that Russell had in mind.

But that’s where the parallel stops. Avoiding nuclear war means business as usual. Getting the long-term future of life with AI right means a very different world. Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. That means a radical end to human exceptionalism. All the more reason to think about the destination now, and to be careful about what we wish for.


Professor Huw Price and Dr Karina Vold are at the Faculty of Philosophy and the Leverhulme Centre for the Future of Intelligence, where they work on 'Agents and persons'. This theme explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person.

Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.

"For safety’s sake, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time" (Huw Price and Karina Vold). Image: GIC on Stocksy.

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Related Links: Leverhulme Centre for the Future of Intelligence
Categories: Cambridge, Cambridgeshire

Silent witnesses: how an ice age was written in the trees

Cambridge University NewsFeed - Tue, 02/27/2018 - 12:00

Researchers use tree rings to unravel past climates and their impact on civilisations. 


What connects a series of volcanic eruptions and severe summer cooling with a century of pandemics, human migration and the rise and fall of civilisations? Tree rings, says Ulf Büntgen, who leads Cambridge’s first dedicated tree-ring laboratory at the Department of Geography.

"Once you embark on these integrative approaches you can ask questions like how did complex societies cope with climate change? That’s when it starts to get really exciting." (Ulf Büntgen). Image: Hrafn Óskarsson, subfossil trees preserved in Iceland.

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Categories: Cambridge, Cambridgeshire

Identification of brain region responsible for alleviating pain could lead to development of opioid alternatives

Cambridge University NewsFeed - Tue, 02/27/2018 - 08:00

The team, led by the University of Cambridge, have pinpointed an area of the brain that is important for endogenous analgesia – the brain’s intrinsic pain relief system. Their results, published in the open access journal eLife, could lead to the development of pain treatments that activate the painkilling system by stimulating this area of the brain, but without the dangerous side-effects of opioids.

Opioid drugs such as oxycodone, hydrocodone and fentanyl hijack the endogenous analgesia system, which is what makes them such effective painkillers. However, they are also highly addictive, which has led to the opioid crisis in the United States, where drug overdose is now the leading cause of death for those under 50, with opioid overdoses representing two-thirds of those deaths.

“We’re trying to understand exactly what the endogenous analgesia system is: why we have it, how it works and where it is controlled in the brain,” said Dr Ben Seymour of Cambridge’s Department of Engineering, who led the research. “If we can figure this out, it could lead to treatments that are much more selective in terms of how they treat pain.”

Pain, while unpleasant, evolved to serve an important survival function. After an injury, for instance, the persistent pain we feel saps our motivation, and so forces us towards rest and recuperation which allows the body to use as much energy as possible for healing.

“Pain can actually help us recover by removing our drive to do unnecessary things - in a sense, this can be considered ‘healthy pain’,” said Seymour. “So why might the brain want to turn down the pain signal sometimes?”

Seymour and his colleagues thought that sometimes this ‘healthy pain’ could be a problem, especially if we could actively do something that might help - such as try and find a way to cool a burn.

In these situations, the brain might activate the pain-killing system to actively look for relief. To prove this, and to try and identify where in the brain this system was activated, the team designed a pair of experiments using brain scanning technology.

In the first experiment, the researchers attached a metal probe to the arms of a series of healthy volunteers and heated it up to a level that was painful, but not enough to physically burn them. The volunteers then played a type of gambling game in which they had to find which button on a small keypad cooled down the probe. The level of difficulty was varied over the course of the experiments - sometimes it was easy to turn the probe off, and sometimes it was difficult. Throughout the task, the volunteers frequently rated their pain, and the researchers constantly monitored their brain activity.

The results found that the level of pain the volunteers experienced was related to how much information there was to learn in the task. When the subjects were actively trying to work out which button they should press, pain was reduced. But when the subjects knew which button to press, it wasn't. The researchers found that the brain was actually computing the benefits of actively looking for and remembering how they got relief, and using this to control the level of pain.
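The "active relief learning" described here can be pictured as a simple trial-and-error value learner. Below is a minimal, purely illustrative Python sketch: the epsilon-greedy rule, learning rate and reward values are assumptions for the demo, not the study's actual computational model.

```python
import random

random.seed(1)

# A toy sketch of relief learning: an agent learns by trial and error
# which of four buttons cools the probe. The task layout mirrors the
# experiment; the learner itself is an illustrative assumption.
N_BUTTONS = 4
relief_button = 2            # hidden from the learner: only this button works
q = [0.0] * N_BUTTONS        # learned expected relief per button
alpha = 0.3                  # learning rate
epsilon = 0.2                # exploration probability

for trial in range(300):
    if random.random() < epsilon:
        choice = random.randrange(N_BUTTONS)                  # explore
    else:
        choice = max(range(N_BUTTONS), key=lambda b: q[b])    # exploit
    relief = 1.0 if choice == relief_button else 0.0
    q[choice] += alpha * (relief - q[choice])                 # prediction-error update

# After training, the value estimates single out the relief button.
print(q.index(max(q)))
```

The prediction-error update is the part that parallels the paper's finding: the learner's pain-relief signal is driven by how much there is left to learn about where relief comes from.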

Knowing what this signal should look like, the researchers then searched the brain to see where it was being used. The second experiment identified the signal in a single region of the prefrontal cortex, called the pregenual cingulate cortex.

“These results build a picture of why and how the brain decides to turn off pain in certain circumstances, and identify the pregenual cingulate cortex as a critical ‘decision centre’ controlling pain in the brain,” said Seymour.

This decision centre is a key place to focus future research efforts. In particular, the researchers are now trying to understand what the inputs are to this brain region, if it is stimulated by opioid drugs, what other chemical messenger systems it uses, and how it could be turned on as a treatment for patients with chronic pain.

Suyi Zhang et al. ‘The control of tonic pain by active relief learning.’ eLife (2018). DOI:

Researchers from the UK and Japan have identified how the brain’s natural painkilling system could be used as a possible alternative to opioids for the effective relief of chronic pain, which affects as many as one in three people at some point in their lives.

"Pain can actually help us recover by removing our drive to do unnecessary things - in a sense, this can be considered ‘healthy pain’." (Ben Seymour). Image: Penn State, prescription bottle for Oxycodone tablets and pills on a metal table.

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

License type: Attribution-NonCommercial
Categories: Cambridge, Cambridgeshire

New evidence suggests nutritional labelling on menus may reduce our calorie intake

Cambridge University NewsFeed - Tue, 02/27/2018 - 00:21

Eating too many calories contributes to people becoming overweight and increases the risks of heart disease, diabetes and many cancers, which are among the leading causes of poor health and premature death.

Several studies have looked at whether putting nutritional labels on food and non-alcoholic drinks might have an impact on their purchasing or consumption, but their findings have been mixed. Now, a team of Cochrane researchers has brought together the results of studies evaluating the effects of nutritional labels on purchasing and consumption in a systematic review.

The team reviewed the evidence to establish whether and by how much nutritional labels on food or non-alcoholic drinks affect the amount of food or drink people choose, buy, eat or drink. They considered studies in which the labels had to include information on the nutritional or calorie content of the food or drink. They excluded those including only logos (e.g. ticks or stars), or interpretative colours (e.g. ‘traffic light’ labelling) to indicate healthier and unhealthier foods. In total, the researchers included evidence from 28 studies, of which 11 assessed the impact of nutritional labelling on purchasing and 17 assessed the impact of labelling on consumption.

The team combined results from three studies where calorie labels were added to menus or put next to food in restaurants, coffee shops and cafeterias. For a typical lunch with an intake of 600 calories, such as a slice of pizza and a soft drink, labelling may reduce the energy content of food purchased by about 8% (48 calories). The authors judged the studies to have potential flaws that could have biased the results.

Combining results from eight studies carried out in artificial or laboratory settings could not show with certainty whether adding labels would have an impact on calories consumed. However, when the studies with potential flaws in their methods were removed, the three remaining studies showed that such labels could reduce calories consumed by about 12% per meal. The team noted that there was still some uncertainty around this effect and that further well conducted studies are needed to establish the size of the effect with more precision.
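The review's headline numbers are easy to check. The 600 kcal lunch is the paper's own example; applying the 12% consumption figure to the same meal is an illustrative assumption.

```python
# Worked example of the effect sizes reported in the review.
typical_lunch_kcal = 600
purchase_reduction = 0.08     # ~8% fewer calories purchased when menus are labelled
consumption_reduction = 0.12  # ~12% fewer calories consumed (three robust studies)

kcal_saved_purchasing = typical_lunch_kcal * purchase_reduction
kcal_saved_consuming = typical_lunch_kcal * consumption_reduction
print(round(kcal_saved_purchasing))   # 48, matching the review's figure
print(round(kcal_saved_consuming))    # 72
```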

The Review’s lead author, Professor Theresa Marteau, Director of the Behaviour and Health Research Unit at the University of Cambridge, UK, says: “This evidence suggests that using nutritional labelling could help reduce calorie intake and make a useful impact as part of a wider set of measures aimed at tackling obesity.” She added: “There is no ‘magic bullet’ to solve the obesity problem, so while calorie labelling may help, other measures to reduce calorie intake are also needed.”

Author, Professor Susan Jebb from the University of Oxford commented: “Some outlets are already providing calorie information to help customers make informed choices about what to purchase. This review should provide policymakers with the confidence to introduce measures to encourage or even require calorie labelling on menus and next to food and non-alcoholic drinks in coffee shops, cafeterias and restaurants.”

The researchers were unable to reach firm conclusions about the effect of labelling on calories purchased from grocery stores or vending machines because of the limited evidence available. They added that future research would also benefit from a more diverse consideration of the possible wider impacts of nutritional labelling, including impacts on those producing and selling food as well as on consumers.

Professor Ian Caterson, President of the World Obesity Federation, commented: “Energy labelling has been shown to be effective: people see it and read it and there is a resulting decrease in calories purchased. This is very useful to know – combined with a suite of other interventions, such changes will help slow and eventually turn around the continuing rise in body weight.”

Crockett RA, et al. Nutritional labelling for healthier food or non-alcoholic drink purchasing and consumption. Cochrane Database of Systematic Reviews 2018, Issue 2. Art. No.: CD009315.

New evidence published in the Cochrane Library today shows that adding calorie labels to menus and next to food in restaurants, coffee shops and cafeterias, could reduce the calories that people consume, although the quality of evidence is low. 

"There is no ‘magic bullet’ to solve the obesity problem, so while calorie labelling may help, other measures to reduce calorie intake are also needed" (Theresa Marteau). Image: Michael Stern, Wall_Food_10087.

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

License type: Attribution-ShareAlike
Categories: Cambridge, Cambridgeshire

Scientists link genes to brain anatomy in autism

Cambridge University NewsFeed - Mon, 02/26/2018 - 18:59

Previous studies have reported differences in brain structure of autistic individuals. However, until now, scientists have not known which genes are linked to these differences.

The team at the Autism Research Centre analysed magnetic resonance imaging (MRI) brain scans from more than 150 autistic children and compared them with MRI scans from similarly aged children but who did not have autism. They looked at variation in the thickness of the cortex, the outermost layer of the brain, and linked this to gene activity in the brain.

They discovered a set of genes linked to differences in the thickness of the cortex between autistic and non-autistic children. Many of these genes are involved in how brain cells (or neurons) communicate with each other. Interestingly, many of the genes identified in this study have been shown to have lower activity at the molecular level in post-mortem brain tissue samples from autistic individuals.
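At its core, linking regional brain measures to gene activity is a correlation analysis. The sketch below is purely illustrative (the 68-region count, the synthetic data and the plain Pearson test are assumptions; the published analysis is considerably more involved):

```python
import math
import random

random.seed(2)

# Illustrative only: correlate per-region cortical thickness differences
# with one gene's regional expression profile, using synthetic data.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

expression = [random.uniform(0, 1) for _ in range(68)]   # one gene, 68 regions
noise = [random.gauss(0, 0.05) for _ in range(68)]
# Synthetic thickness differences built to track expression, for the demo:
thickness_diff = [0.4 * e + n for e, n in zip(expression, noise)]

r = pearson(expression, thickness_diff)
print(round(r, 2))  # strong positive correlation in this constructed example
```

A gene whose regional expression tracks the regions where cortical thickness differs would show a high |r|; the study's contribution is doing this systematically across many genes.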

The study was led by two postdoctoral scientists, Dr Rafael Romero-Garcia and Dr Richard Bethlehem, and Varun Warrier, a PhD student. The study is published in the journal Molecular Psychiatry and provides the first evidence linking differences in the autistic brain to genes with atypical gene activity in autistic brains.

Dr Richard Bethlehem said: “This takes us one step closer to understanding why the brains of people with and without autism may differ from one another. We have long known that autism itself is genetic, but by combining these different data sets (brain imaging and genetics) we can now identify more precisely which genes are linked to how the autistic brain may differ. In essence, we are beginning to link molecular and macroscopic levels of analysis to better understand the diversity and complexity of autism.”

Varun Warrier added: “We now need to confirm these results using new genetic and brain scan data so as to understand how exactly gene activity and thickness of the cortex are linked in autism.”

“The identification of genes linked to brain changes in autism is just the first step,” said Dr Rafael Romero-Garcia. “These promising findings reveal how important multidisciplinary approaches are if we want to better understand the molecular mechanisms underlying autism. The complexity of this condition requires a joint effort from a wide scientific community.”

The research was supported by the Medical Research Council, the Autism Research Trust, the Wellcome Trust, and the Templeton World Charity Foundation, Inc.

Romero-Garcia, R et al. Synaptic and transcriptionally downregulated genes are associated with cortical thickness differences in autism. Molecular Psychiatry; 26 Feb; DOI: 10.1038/s41380-018-0023-7

A team of scientists at the University of Cambridge has discovered that specific genes are linked to individual differences in brain anatomy in autistic children.

"This takes us one step closer to understanding why the brains of people with and without autism may differ from one another" (Richard Bethlehem). Image: Lance Neilson, “What are you looking at?”

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

License type: Attribution
Categories: Cambridge, Cambridgeshire

Bin crews ready to tackle snow

Cambridge Council Feed - Mon, 02/26/2018 - 16:55

Waste bosses have confirmed that bin crews are ready to tackle forecasted snow over the next few days and plans are in place to try and minimise disruption to collections.

Latest Met Office weather warnings show Cambridgeshire is expecting snow and freezing temperatures tonight (Monday) and for at least the next two days.

Categories: Cambridge, Cambridgeshire

Helping police make custody decisions using artificial intelligence

Cambridge University NewsFeed - Mon, 02/26/2018 - 13:20

“It’s 3am on Saturday morning. The man in front of you has been caught in possession of drugs. He has no weapons, and no record of any violent or serious crimes. Do you let the man out on police bail the next morning, or keep him locked up for two days to ensure he comes to court on Monday?”

The kind of scenario Dr Geoffrey Barnes is describing – whether to detain a suspect in police custody or release them on bail – occurs hundreds of thousands of times a year across the UK. The outcome of this decision could have major consequences for the suspect, for public safety and for the police.

“The police officers who make these custody decisions are highly experienced,” explains Barnes. “But all their knowledge and policing skills can’t tell them the one thing they need to know most about the suspect – how likely is it that he or she is going to cause major harm if they are released? This is a job that really scares people – they are at the front line of risk-based decision-making.”

Barnes and Professor Lawrence Sherman, who leads the Jerry Lee Centre for Experimental Criminology in the University of Cambridge’s Institute of Criminology, have been working with police forces around the world to ask whether AI can help. 

“Imagine a situation where the officer has the benefit of a hundred thousand, and more, real previous experiences of custody decisions?” says Sherman. “No one person can have that number of experiences, but a machine can.”

In mid-2016, with funding from the Monument Trust, the researchers installed the world’s first AI tool for helping police make custodial decisions in Durham Constabulary.

Called the Harm Assessment Risk Tool (HART), the AI-based technology uses 104,000 histories of people previously arrested and processed in Durham custody suites over the course of five years, with a two-year follow-up for each custody decision. Using a method called “random forests”, the model looks at vast numbers of combinations of ‘predictor values’, the majority of which focus on the suspect’s offending history, as well as age, gender and geographical area. 

“These variables are combined in thousands of different ways before a final forecasted conclusion is reached,” explains Barnes. “Imagine a human holding this number of variables in their head, and making all of these connections before making a decision. Our minds simply can’t do it.”

The aim of HART is to categorise whether in the next two years an offender is high risk (highly likely to commit a new serious offence such as murder, aggravated violence, sexual crimes or robbery); moderate risk (likely to commit a non-serious offence); or low risk (unlikely to commit any offence). 
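The random-forest approach described above can be sketched in miniature. The following is an illustrative toy only: HART's actual model, its 104,000 training histories and its predictors are not public, so the feature names, data and risk labels here are entirely hypothetical, and real implementations grow full decision trees rather than the single-split 'stumps' used below.

```python
import random
from collections import Counter

# Toy random forest of single-split decision 'stumps'. Illustrative only:
# HART's real model and predictors are not public; everything below is
# hypothetical.

def fit_stump(rows, labels, feature):
    """Find the threshold on one feature that best separates the labels."""
    best = None
    for t in sorted({r[feature] for r in rows}):
        left = [l for r, l in zip(rows, labels) if r[feature] <= t]
        right = [l for r, l in zip(rows, labels) if r[feature] > t]
        if not left or not right:
            continue  # degenerate split
        # Score = rows correctly classified by each side's majority label.
        score = sum(max(Counter(side).values()) for side in (left, right))
        if best is None or score > best[0]:
            best = (score, t,
                    Counter(left).most_common(1)[0][0],
                    Counter(right).most_common(1)[0][0])
    if best is None:  # feature was constant in this sample
        majority = Counter(labels).most_common(1)[0][0]
        return lambda r: majority
    _, t, lo, hi = best
    return lambda r: lo if r[feature] <= t else hi

def fit_forest(rows, labels, n_trees=101, seed=0):
    """Each tree sees a bootstrap sample and a random feature."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(rows)) for _ in rows]
        feature = rng.randrange(len(rows[0]))
        trees.append(fit_stump([rows[i] for i in idx],
                               [labels[i] for i in idx], feature))
    return trees

def predict(trees, row):
    """Majority vote across the ensemble."""
    return Counter(tree(row) for tree in trees).most_common(1)[0][0]

# Hypothetical predictors per suspect: [prior offences, age, area code].
rows = [[0, 45, 1], [1, 30, 2], [7, 22, 3], [9, 25, 3], [0, 60, 1], [8, 19, 2]]
labels = ["low", "low", "high", "high", "low", "high"]

forest = fit_forest(rows, labels)
print(predict(forest, [6, 24, 3]))   # many priors: the forest votes "high"
```

Because each tree is trained on a different random resample and a different random view of the predictors, the ensemble effectively tries out thousands of combinations of predictor values – the property Barnes describes as beyond any human decision-maker.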

“The need for good prediction is not just about identifying the dangerous people,” explains Sherman. “It’s also about identifying people who definitely are not dangerous. For every case of a suspect on bail who kills someone, there are tens of thousands of non-violent suspects who are locked up longer than necessary.”

Durham Constabulary want to identify the ‘moderate-risk’ group, who account for just under half of all suspects according to the statistics generated by HART. These individuals might benefit from the force’s Checkpoint programme, which aims to tackle the root causes of offending and offer an alternative to prosecution, one that the force hopes will turn moderate risks into low risks.

“It’s needles and haystacks,” says Sherman. “On the one hand, the dangerous ‘needles’ are too rare for anyone to meet often enough to spot them on sight. On the other, the ‘hay’ poses no threat and keeping them in custody wastes resources and may even do more harm than good.” A randomised controlled trial is currently under way in Durham to test the use of Checkpoint among those forecast as moderate risk.

HART is also being refreshed with more recent data – a step that Barnes explains will be an important part of this sort of tool: “A human decision-maker might adapt immediately to a changing context – such as a prioritisation of certain offences, like hate crime – but the same cannot necessarily be said of an algorithmic tool. This suggests the need for careful and constant scrutiny of the predictors used and for frequently refreshing the algorithm with more recent historical data.”

No prediction tool can be perfect. An independent validation study of HART found an overall accuracy of around 63%. But, says Barnes, the real power of machine learning comes not from the avoidance of any error at all but from deciding which errors you most want to avoid. 

“Not all errors are equal,” says Sheena Urwin, head of criminal justice at Durham Constabulary and a graduate of the Institute of Criminology’s Police Executive Master of Studies Programme. “The worst error would be if the model forecasts low and the offender turned out high.”

“In consultation with the Durham police, we built a system that is 98% accurate at avoiding this most dangerous form of error – the ‘false negative’ – the offender who is predicted to be relatively safe, but then goes on to commit a serious violent offence,” adds Barnes. “AI is infinitely adjustable and when constructing an AI tool it’s important to weigh up the most ethically appropriate route to take.”

The researchers also stress that HART’s output is for guidance only, and that the ultimate decision is that of the police officer in charge.

“HART uses Durham’s data and so it’s only relevant for offences committed in the jurisdiction of Durham Constabulary. This limitation is one of the reasons why such models should be regarded as supporting human decision-makers not replacing them,” explains Barnes. “These technologies are not, of themselves, silver bullets for law enforcement, and neither are they sinister machinations of a so-called surveillance state.”

Some decisions, says Sherman, have too great an impact on society and the welfare of individuals for them to be influenced by an emerging technology.

Where AI-based tools do show great promise, however, is in using forecasts of offenders’ risk levels for effective ‘triage’, as Sherman describes: “The police service is under pressure to do more with less, to target resources more efficiently, and to keep the public safe. 

“The tool helps identify the few ‘needles in the haystack’ who pose a major danger to the community, and whose release should be subject to additional layers of review. At the same time, better triaging can lead to the right offenders receiving release decisions that benefit both them and society.”


Police at the “front line” of difficult risk-based judgements are trialling an AI system trained by University of Cambridge criminologists to give guidance using the outcomes of five years of criminal histories.

The tool helps identify the few ‘needles in the haystack’ who pose a major danger to the community, and whose release should be subject to additional layers of review
– Lawrence Sherman
Image: Rene Böhmer on Unsplash

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Categories: Cambridge, Cambridgeshire

Young children use physics, not previous rewards, to learn about tools

Cambridge University NewsFeed - Fri, 02/23/2018 - 19:00

The findings of the study, based on Aesop’s fable The Crow and the Pitcher, help resolve a debate about whether children learning to use tools are genuinely learning about physical causation or are simply driven by which action previously led to a treat.

Learning about causality – about the physical rules that govern the world around us – is a crucial part of our cognitive development. From our observations and the outcome of our own actions, we build an idea – a model – of which tools are functional for particular jobs, and which are not.

However, the information we receive isn’t always as straightforward as it should be. Sometimes outside influences mean that things that should work, don’t. Similarly, sometimes things that shouldn’t work, do.

Dr Lucy Cheke from the Department of Psychology at the University of Cambridge says: “Imagine a situation where someone is learning about hammers. There are two hammers that they are trying out – a metal one and an inflatable one. Normally, the metal hammer would successfully drive a nail into a plank of wood, while the inflatable hammer would bounce off harmlessly.

“But what if your only experience of these two hammers was trying to use the metal hammer and missing the nail, but using the inflatable hammer to successfully push the nail into a large pre-drilled hole? If you’re then presented with another nail, which tool would you choose to use? The answer depends on what type of information you have taken from your learning experience.”

In this situation, explains Cheke, a learner concerned with the outcome (a ‘reward’ learner) would learn that the inflatable hammer was the successful tool and opt to use it for later hammering. However, a learner concerned with physical forces (a ‘functionality’ learner) would learn that the metal hammer produced a percussive force, albeit in the wrong place, and that the inflatable hammer did not, and would therefore opt for the metal hammer.
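Cheke's hammer scenario can be written as a toy decision rule contrasting the two learners. The encoding below (one boolean per property per trial) is a hypothetical simplification for illustration, not the study's actual model.

```python
# Toy contrast between a 'reward' learner and a 'functionality' learner in
# the hammer example. The encoding is a hypothetical simplification, not
# the study's actual model.
trials = [
    {"tool": "metal",      "force": True,  "rewarded": False},  # missed the nail
    {"tool": "inflatable", "force": False, "rewarded": True},   # pre-drilled hole
]

def choose(trials, criterion):
    """Pick the tool whose learning history best satisfies the criterion."""
    return max(trials, key=lambda trial: trial[criterion])["tool"]

reward_learner = choose(trials, "rewarded")      # what previously got the treat
functionality_learner = choose(trials, "force")  # what produced percussive force

print(reward_learner, functionality_learner)  # inflatable metal
```

The same learning history thus yields opposite choices depending on which kind of information the learner extracts – exactly the contrast the study set out to measure in children.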

Now, in a study published in the open access journal PLOS ONE, Dr Cheke and colleagues investigated what kind of information children extract from situations where the relevant physical characteristics of a potential tool are observable, but often at odds with whether the use of that tool in practice achieved the desired goal.

The researchers presented children aged 4–11 with a task in which they had to retrieve a floating token to earn sticker rewards. Each time, the children were presented with a container of water and a set of tools they could use to raise the water level. The experiment is based on one of Aesop’s most famous fables, in which a thirsty crow drops stones into a pitcher to reach the water.

In this test, some of the tools were ‘functional’ and some ‘non-functional’. Functional tools were those that, if dropped into a standard container, would sink, raising the water level and bringing the token within reach; non-functional tools were those that would not do so, for example because they floated.

However, sometimes the children used functional tools to attempt to raise the level in a leaking container – in this context, the water would never rise high enough to bring the token within reach, no matter how functional the tool used.

At other times, the children were successful in retrieving the reward despite using a non-functional tool; for example, when using a water container that self-fills through an inlet pipe, it doesn’t matter whether the tool is functional as the water is rising anyway.

After these learning sessions, the researchers presented the children with a ‘standard’ water container and a series of choices between different tools. From the pattern of these choices the researchers could calculate what type of information was most influential on children’s decision-making: reward or function. 

“A child doesn’t have to know the precise rules of physics that allow a tool to work to have a feeling of whether or not it should work,” says Elsa Loissel, co-first author of the study. “So, we can look at whether a child’s decision making is guided by principles of physics without requiring them to explicitly understand the physics itself.

“We expected older children, who might have a rudimentary understanding of physical forces, to choose according to function, while younger children would be expected to use the simpler learning approach and base their decisions on what had been previously rewarded,” adds co-first author Dr Cheke. “But this wasn’t what we found.”

In fact, the researchers showed that information about reward was never a reliable predictor of children’s choices. Instead, the influence of functionality information increased with age – by the age of seven, it was the dominant influence on their decision making.

“This suggests that, remarkably, children begin to emphasise information about physics over information about previous rewards from as young as seven years of age, even when these two types of information are in direct conflict,” says Cheke.

This research was funded by the European Research Council under the European Union’s Seventh Framework Programme.

Elsa Loissel, Lucy Cheke & Nicola Clayton. Exploring the Relative Contributions of Reward-History and Functionality Information to Children’s Acquisition of The Aesop’s Fable Task. PLOS ONE; 23 Feb 2018; DOI: 10.1371/journal.pone.0193264

Children as young as seven apply basic laws of physics to problem-solving, rather than learning from what has previously been rewarded, suggests new research from the University of Cambridge.

Remarkably, children begin to emphasise information about physics over information about previous rewards from as young as seven years of age, even when these two types of information are in direct conflict
– Lucy Cheke
Image: Sharon Mollerus, Dominoes 3


Categories: Cambridge, Cambridgeshire

In tech we trust?

Cambridge University NewsFeed - Fri, 02/23/2018 - 09:30

Dr Jat Singh is familiar with breaking new ground and working across disciplines. Even so, he and colleagues were pleasantly surprised by how much enthusiasm has greeted their new Strategic Research Initiative on Trustworthy Technologies, which brings together science, technology and humanities researchers from across the University.

In fact, Singh, a researcher in Cambridge’s Department of Computer Science and Technology, has been collaborating with lawyers for several years: “A legal perspective is paramount when you’re researching the technical dimensions to compliance, accountability and trust in emerging ICT; although the Computer Lab is not the usual home for lawyers, we have two joining soon.”

Governance and public trust present some of the greatest challenges in technology today. The European General Data Protection Regulation (GDPR), which comes into force this year, has brought forward debates such as whether individuals have a ‘right to an explanation’ regarding decisions made by machines, and introduces stiff penalties for breaching data protection rules. “With penalties including fines of up to 4% of global turnover or €20 million, people are realising that they need to take data protection much more seriously,” he says.

Singh is particularly interested in how data-driven systems and algorithms – including machine learning – will soon underpin and automate everything from transport networks to council services.

As we work, shop and travel, computers and mobile phones already collect, transmit and process much data about us; as the ‘Internet of Things’ continues to instrument the physical world, machines will increasingly mediate and influence our lives.

It’s a future that raises profound issues of privacy, security, safety and ultimately trust, says Singh, whose research is funded by an Engineering and Physical Sciences Research Council Fellowship: “We work on mechanisms for better transparency, control and agency in systems, so that, for instance, if I give data to someone or something, there are means for ensuring they’re doing the right things with it. We are also active in policy discussions to help better align the worlds of technology and law.”

What it means to trust machine learning systems also concerns Dr Adrian Weller. Before becoming a senior research fellow in the Department of Engineering and a Turing Fellow at The Alan Turing Institute, he spent many years working in trading for leading investment banks and hedge funds, and has seen first-hand how machine learning is changing the way we live and work.

“Not long ago, many markets were traded on exchanges by people in pits screaming and yelling,” Weller recalls. “Today, most market making and order matching is handled by computers. Automated algorithms can typically provide tighter, more responsive markets – and liquid markets are good for society.”

But cutting humans out of the loop can have unintended consequences, as the flash crash of 2010 shows. During 36 minutes on 6 May, nearly one trillion dollars were wiped off US stock markets as an unusually large sell order produced an emergent coordinated response from automated algorithms. “The flash crash was an important example illustrating that over time, as we have more AI agents operating in the real world, they may interact in ways that are hard to predict,” he says.

Algorithms are also beginning to be involved in critical decisions about our lives and liberty. In medicine, machine learning is helping diagnose diseases such as cancer and diabetic retinopathy; in US courts, algorithms are used to inform decisions about bail, sentencing and parole; and on social media and the web, our personal data and browsing history shape the news stories and advertisements we see.

How much we trust the ‘black box’ of machine learning systems, both as individuals and society, is clearly important. “There are settings, such as criminal justice, where we need to be able to ask why a system arrived at its conclusion – to check that appropriate process was followed, and to enable meaningful challenge,” says Weller. “Equally, to have effective real-world deployment of algorithmic systems, people will have to trust them.”

But even if we can lift the lid on these black boxes, how do we interpret what’s going on inside? “There are many kinds of transparency,” he explains. “A user contesting a decision needs a different kind of transparency to a developer who wants to debug a system. And a third form of transparency might be needed to ensure a system is accountable if something goes wrong, for example an accident involving a driverless car.”

If we can make them trustworthy and transparent, how can we ensure that algorithms do not discriminate unfairly against particular groups? While it might be useful for Google to advertise products it ‘thinks’ we are most likely to buy, it is more disquieting to discover the assumptions it makes based on our name or postcode.

When Latanya Sweeney, Professor of Government and Technology in Residence at Harvard University, tried to track down one of her academic papers by Googling her name, she was shocked to be presented with ads suggesting that she had been arrested. After much research, she discovered that “black-sounding” names were 25% more likely to result in the delivery of this kind of advertising.

Like Sweeney, Weller is both disturbed and intrigued by examples of machine-learned discrimination. “It’s a worry,” he acknowledges. “And people sometimes stop there – they assume it’s a case of garbage in, garbage out, end of story. In fact, it’s just the beginning, because we’re developing techniques that can automatically detect and remove some forms of bias.”

Transparency, reliability and trustworthiness are at the core of Weller’s work at the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute. His project grapples with how to make machine-learning decisions interpretable, develop new ways to ensure that AI systems perform well in real-world settings, and examine whether empathy is possible – or desirable – in AI.

Machine learning systems are here to stay. Whether they are a force for good rather than a source of division and discrimination depends partly on researchers such as Singh and Weller. The stakes are high, but so are the opportunities. Universities have a vital role to play, both as critic and conscience of society. Academics can help society imagine what lies ahead and decide what we want from machine learning – and what it would be wise to guard against.

Weller believes the future of work is a huge issue: “Many jobs will be substantially altered if not replaced by machines in coming decades. We need to think about how to deal with these big changes.” And academics must keep talking as well as thinking. “We’re grappling with pressing and important issues,” he concludes. “As technical experts we need to engage with society and talk about what we’re doing so that policy makers can try to work towards policy that’s technically and legally sensible.”


Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.

With penalties including fines of up to €20 million, people are realising that they need to take data protection much more seriously
– Jat Singh
Image: Daniel Werbrouck


Categories: Cambridge, Cambridgeshire

Study in mice suggests personalised stem cell treatment may offer relief for progressive MS

Cambridge University NewsFeed - Thu, 02/22/2018 - 17:00

The study, led by researchers at the University of Cambridge, is a step towards developing personalised treatments based on a patient’s own skin cells for diseases of the central nervous system (CNS).

In MS, the body’s own immune system attacks and damages myelin, the protective sheath around nerve fibres, causing disruption to messages sent around the brain and spinal cord. Symptoms are unpredictable and include problems with mobility and balance, pain, and severe fatigue.

Key immune cells involved in causing this damage are macrophages (literally ‘big eaters’), which ordinarily serve to attack and rid the body of unwanted intruders. A particular type of macrophage known as microglia are found throughout the brain and spinal cord – in progressive forms of MS, they attack the CNS, causing chronic inflammation and damage to nerve cells.

Recent advances have raised expectations that diseases of the CNS may be improved by the use of stem cell therapies. Stem cells are the body’s ‘master cells’, which can develop into almost any type of cell within the body. Previous work from the Cambridge team has shown that transplanting neural stem cells (NSCs) – stem cells that are part-way to developing into nerve cells – reduces inflammation and can help the injured CNS heal.

However, even if such a therapy could be developed, it would be hindered by the fact that such NSCs are sourced from embryos and therefore cannot be obtained in large enough quantities. Also, there is a risk that the body will see them as an alien invader, triggering an immune response to destroy them.

A possible solution to this problem would be the use of so-called ‘induced neural stem cells (iNSCs)’ – these cells can be generated by taking an adult’s skin cells and ‘re-programming’ them back to become neural stem cells. As these iNSCs would be the patient’s own, they are less likely to trigger an immune response.

Now, in research published in the journal Cell Stem Cell, researchers at the University of Cambridge have shown that iNSCs may be a viable option for repairing some of the damage caused by MS.

Using mice that had been manipulated to develop MS, the researchers discovered that chronic MS leads to significantly increased levels of succinate – a small metabolite that sends signals to macrophages and microglia, tricking them into causing inflammation – in the cerebrospinal fluid, but not in the peripheral blood.

Transplanting NSCs and iNSCs directly into the cerebrospinal fluid reduces the amount of succinate, reprogramming the macrophages and microglia – in essence, turning ‘bad’ immune cells ‘good’. This leads to a decrease in inflammation and subsequent secondary damage to the brain and spinal cord.

“Our mouse study suggests that using a patient’s reprogrammed cells could provide a route to personalised treatment of chronic inflammatory diseases, including progressive forms of MS,” says Dr Stefano Pluchino, lead author of the study from the Department of Clinical Neurosciences at the University of Cambridge.

“This is particularly promising as these cells should be more readily obtainable than conventional neural stem cells and would not carry the risk of an adverse immune response.”

The research team was led by Dr Pluchino, together with Dr Christian Frezza from the MRC Cancer Unit at the University of Cambridge, and brought together researchers from several university departments.

Dr Luca Peruzzotti-Jametti, the first author of the study and a Wellcome Trust Research Training Fellow, says: “We made this discovery by bringing together researchers from diverse fields including regenerative medicine, cancer, mitochondrial biology, inflammation and stroke and cellular reprogramming. Without this multidisciplinary collaboration, many of these insights would not have been possible."

The research was funded by Wellcome, European Research Council, Medical Research Council, Italian Multiple Sclerosis Association, Congressionally-Directed Medical Research Programs, the Evelyn Trust and the Bascule Charitable Trust.

Peruzzotti-Jametti, L et al. Macrophage-derived extracellular succinate licenses neural stem cells to suppress chronic neuroinflammation. Cell Stem Cell; 2018; 22: 1-14; DOI: 10.1016/j.stem.2018.01.20

Scientists have shown in mice that skin cells re-programmed into brain stem cells, transplanted into the central nervous system, help reduce inflammation and may be able to help repair damage caused by multiple sclerosis (MS).

Our mouse study suggests that using a patient’s reprogrammed cells could provide a route to personalised treatment of chronic inflammatory diseases, including progressive forms of MS
– Luca Peruzzotti-Jametti
Image: Andrew c, Neuron with oligodendrocyte and myelin sheath (edited)

Researcher profile: Dr Luca Peruzzotti-Jametti

It isn’t every day that you find yourself invited to play croquet with a Nobel laureate, but then Cambridge isn’t every university, as Dr Luca Peruzzotti-Jametti discovered when he was fortunate enough to be invited to the house of Professor Sir John Gurdon.

“It was an honour to meet a Nobel laureate who has influenced my studies so much, and to meet the man behind the science,” he says. “I was moved by how kind he is and extremely impressed by his endless passion for science.”

Dr Peruzzotti-Jametti began his career studying medicine at the University Vita-Salute San Raffaele, Milan. His career took him across Europe – to Switzerland, Denmark and Sweden – and now to Cambridge. After completing a PhD in Clinical Neurosciences here, he is now a Wellcome Trust Research Training Fellow.

His work focuses on multiple sclerosis (MS), an autoimmune disease that affects around 100,000 people in the UK alone. Although several therapies help during the initial (or ‘relapsing remitting’) phase of MS, the majority of people with MS will develop a chronic worsening of disability within 15 years of diagnosis. This late form of MS is called secondary progressive and, unlike relapsing remitting MS, it has no effective treatment.

“My research sets out to understand how progression works in MS by studying how inflammation is maintained in the brains of patients, and to develop new treatments aimed at preventing disease progression,” he explains. Among his approaches is the use of neural stem cells and induced neural stem cells, as in the above study. “My hope is that using a patient’s reprogrammed cells could provide a route to personalised treatment of chronic inflammatory diseases, including progressive forms of MS.”

Dr Peruzzotti-Jametti is based on the Cambridge Biomedical Campus where he works closely with clinicians at Addenbrooke’s Hospital and with basic scientists, a community he describes as “vibrant”.

“Cambridge has been the best place to do my research due to the incredible concentration of scientists who pursue novel therapeutic approaches using cutting-edge technologies,” he says. “I am very thankful for the support I have received in the past years from top-notch scientists. Being in Cambridge has also helped me compete for major funding sources, and my work would not have been possible without the support of the Wellcome Trust.

“I wish to continue working in this exceptional environment where so many minds and efforts are put together in a joint cause for the benefit of those who suffer.”


Categories: Cambridge, Cambridgeshire

Cambridge University and Institute of Cancer Research launch Children’s Brain Tumour Centre of Excellence

Cambridge University NewsFeed - Thu, 02/22/2018 - 16:06

The announcement comes as Cancer Research UK (CRUK) commits an extra £25 million to brain tumour research over the next five years. This is in addition to the £13 million the charity spends each year on research and development of new treatments for the disease.

Cancer Research UK’s funding will support two new specialised centres. The first of these, the Children’s Brain Tumour Centre of Excellence, brings together world-leading experts to discover and develop new treatments to tackle brain tumours in children. A second centre focusing on adult brain tumours will open later in the year.

The Centre will be led by childhood brain tumour expert Professor Richard Gilbertson, Director of the Cancer Research UK Cambridge Centre.

“By creating a hub of expertise for childhood brain tumour research in the UK, we aim to make real inroads to tackling these diseases,” said Professor Gilbertson. “Gathering this expertise together means we can shine a light on the numerous challenges and difficulties that brain tumours pose and discover new treatments to ensure that more children survive their disease.”

The announcement comes as the Health and Social Care Secretary Jeremy Hunt committed an estimated £20 million in funding to tackle brain tumours and deliver a “step change” in survival rates. The funding will be invested through the National Institute for Health Research over the next five years – with the aim of doubling this once new high-quality research proposals become available.

Each year around 11,400 people in the UK are diagnosed with a brain tumour and just 14% of people survive their disease for 10 or more years.

Jeremy Hunt MP said: “While survival rates for most cancers are at record levels, the prognosis for people with brain tumours has scarcely improved in over a generation.

“Our ambition is to deliver a big uplift in the funding of brain cancer research, while galvanising the clinical and scientific communities to explore new avenues for diagnosis and treatment in the future – it is a chance to create a genuine, step change in survival rates for one of the deadliest forms of cancer.”

Sir Harpal Kumar, Cancer Research UK’s chief executive, added: “Brain tumours remain a huge challenge, with survival barely improving over the last 30 years. Since we laid out our plans to tackle this challenge in 2014, Cancer Research UK has already substantially increased its funding into brain tumours and attracted some of the world’s leading experts to the UK.

“This new funding will mean that we can accelerate these efforts further, by developing a critical mass of expertise in key areas and supporting work along the entire research pipeline to improve survival for children and adults with brain tumours.”

Cambridge leading innovative brain cancer research

Cancer Research UK (CRUK) has today announced funding for a new Children’s Brain Tumour Centre of Excellence, based at the University of Cambridge and The Institute of Cancer Research, London.

By creating a hub of expertise for childhood brain tumour research in the UK, we aim to make real inroads to tackling these diseases
– Richard Gilbertson


Categories: Cambridge, Cambridgeshire

Fuel Poverty Awareness Day - a helping hand for Cambridge residents

Cambridge Council Feed - Thu, 02/22/2018 - 14:26

FRIDAY 23 February marks Fuel Poverty Awareness Day, when organisations across the country, including Cambridge City Council, will be raising awareness of the problems faced by those struggling to keep warm in their homes, and highlighting initiatives that are in place to tackle the issue.

Around 4 million UK households are in fuel poverty and are unable to afford to live in a warm, dry home. This issue also affects Cambridge residents, with the latest Government figures stating that over 5,000 residents here are in fuel poverty.

Categories: Cambridge, Cambridgeshire

Cambridge and Indian partners launch collaboration to transform India’s “Green Revolution”

Cambridge University NewsFeed - Thu, 02/22/2018 - 09:58

The adoption of modern methods and new technologies in agriculture that propelled India to self-sufficiency in grain production in the second half of the 20th century is known as the country’s “Green Revolution”. It allowed India to overcome poor agricultural productivity, especially in regions like the Punjab and Uttar Pradesh, although it relied on overuse of water, fertilisers and pesticides.

Today, climate change, continuing population growth and the rapid process of urbanisation have put added pressure on India’s ability to feed its population. TIGR2ESS – an acronym for “Transforming India’s Green Revolution by Research and Empowerment for Sustainable food Supplies” – is a £7.8 million programme funded by the UK Global Challenges Research Fund (GCRF) to develop more resilient, equal and diverse food systems in India. It aims to define the requirements for a second, more sustainable Green Revolution, and to deliver this through a suite of research programmes, training workshops and educational activities.

The TIGR2ESS launch event took place in the context of a three-day workshop that brought together all the UK and India partners to discuss and finalise a plan for the programme’s effective implementation.

TIGR²ESS will support 14 postdoctoral researchers employed at partner research institutions and universities across India, as well as eight postdoctoral research associates from collaborating institutions in the UK.

The programme will create 3-year research opportunities for a total of 22 early-career researchers in the UK and India, and also promote academic exchanges at all levels in laboratories across India and the UK.

One of TIGR²ESS’ objectives is to foster mutually beneficial knowledge exchange and collaborative research through workshops in Cambridge and India. In addition, it will deliver a programme of outreach, education and entrepreneurship. In doing so, TIGR²ESS will help strengthen Indian research capacity in key areas of the food system, and will contribute to the development of smart agriculture in India.

At the heart of the TIGR2ESS proposal are a series of Flagship Projects tackling fundamental research questions, and addressing the associated social issues facing farmers in the context of increasing urbanisation and climate change.

Professor Stephen Toope, Vice-Chancellor of the University of Cambridge, said: “TIGR²ESS will inform best practice in crop development and growth. It will allow greater genetic understanding of crop resilience to drought and disease. It will contribute to more effective use of scarce water supplies. It will build capacity and foster education.”

“It will empower women and entrepreneurs, and encourage innovation along the food supply chain. It will create opportunities for early-career researchers, and in doing so will contribute to India’s efforts to ensure it is able to meet the needs of its growing population. I am delighted that Cambridge is a part of this extraordinary initiative.”

Professor Ashutosh Sharma, Secretary of India’s Department of Science and Technology and Department of Biotechnology, added: “India is a diverse country, and negotiating this diversity is the key to developing any interventions. The TIGR²ESS programme takes into account this diversity, and that will define its success. We need to take a holistic view of the nexus between agriculture, environment, water, climate, energy and health. Assessing the impact of technology applications or interventions in a larger setting is very important.”

Presenting TIGR²ESS, the University of Cambridge’s Professor Howard Griffiths, the programme’s principal investigator, said: “This unprecedented programme of joint activities will enable capacity building both in the UK and India, and shape the policy needed to define a second Green Revolution for India.”

“TIGR²ESS will address the challenges identified by our colleagues in India, and translate research outcomes to build agriculture systems that support sustainable livelihoods, enhancing the well-being and health of rural communities with a particular focus on improving the opportunities for equality, female empowerment and youth employment, and market-led entrepreneurial opportunities.”

Daniel Shah, Director of Research Councils UK (RCUK) India, said: “TIGR²ESS is a great example of UK and Indian research teams partnering to address issues around food security and agriculture systems. This initiative also aligns with Indian Prime Minister Narendra Modi’s vision to double farmers’ income by 2020.”

Researchers met in New Delhi today to formalise the launch of a programme that aims to jointly address some of India’s most pressing food security challenges.

“This unprecedented programme of joint activities will enable capacity building both in the UK and India, and shape the policy needed to define a second Green Revolution for India.” – Prof Howard Griffiths

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Categories: Cambridge, Cambridgeshire

Stroke survivors and caregivers feel abandoned by health services, study finds

Cambridge University NewsFeed - Wed, 02/21/2018 - 19:00

The study, by researchers at the University of Cambridge, suggests that primary care and community health care interventions which focus on improving active follow-up and information provision to patients and caregivers, especially in the first year after stroke, could help improve patient self-management and increase stroke-specific health literacy.

Globally, stroke is the second leading cause of death. Stroke-related disability burden is on the rise, with a 12% increase worldwide since 1990, and contributes to the large economic burden of stroke due to healthcare use, informal care and the loss of productivity. The annual cost of stroke, including health care costs, medicines and missed days of work, is estimated at $33 billion in the USA and £8.9 billion in the UK.

Primary care could play an important role in the care of stroke survivors and their caregivers, supporting access to community services and facilitating transfer back to specialist services when new problems emerge. It could also help provide training, and identify and address health needs of caregivers. However, the feeling of abandonment that people with stroke experience following hospital discharge suggests this role is not being fulfilled.

To better understand the possible reasons behind this feeling of abandonment, a team at Cambridge’s Department of Public Health and Primary Care carried out a systematic review of qualitative evidence in the field. In total, they analysed 51 studies (encompassing 566 stroke survivors and 593 caregivers). Their results are published today in the journal PLOS ONE.

The analysis found an unaddressed need for continued support in a quarter of studies. Survivors and caregivers felt frustrated and dissatisfied with a lack of proactive follow-up from primary care, the hospital, or allied healthcare professionals. This left them feeling dissatisfied and uncertain, with the sense that the stroke survivor had been “forgotten and written off” and that their general practice did not care about them.

Lack of support for caregivers was reported in more than one in five studies (22%), even though they felt healthcare professionals assumed that they would provide the majority of care needed. They felt ill prepared and pressured to “become experts” in caring for stroke survivors. In addition, both survivors and caregivers felt emotional support was lacking, even though they are at risk of anxiety and depression.

Long waiting times for assessment and rehabilitation and little or no help from social services left survivors feeling “left in the lurch”. Caregivers felt that access to rehabilitation was not provided early enough, causing survivors to “go backwards”.

More than two in five studies (41%) highlighted gaps in information provision. Opportunities for support could be missed due to a lack of knowledge of what services were available. The lack of information about local services and how to find them was confusing and prevented access. Many caregivers and survivors had to find information by themselves from the internet, friends and other caregivers. When information was provided, it was often inconsistent and covered only some services.

A quarter (23%) of the studies highlighted inadequate information on stroke, its consequences, and recovery. Information presented too early after stroke disempowered stroke survivors and caregivers, leading to feelings of confusion, fear and powerlessness. Survivors and caregivers wanted specific information on the significance of post-stroke symptoms and how to manage them. Lack of information led to unrealistic expectations of “getting back to normal”, leading to disappointment and tensions between the survivor and caregiver.

Ineffective communication between survivors, caregivers and healthcare services as well as within healthcare services resulted in feelings of frustration and having “to battle the system”. Gaps in the transfer of knowledge within the healthcare system and the use of medical jargon sometimes caused confusion and were construed as indifference to survivors’ needs.

“Patients and caregivers would benefit from active follow up and information provision about stroke that is tailored to their specific needs, which change over time,” says Professor Jonathan Mant, who led the study. “People take active efforts to find information for themselves, but navigating and appraising it can be challenging. What is needed is trustworthy information written in an accessible language and format, which could support better self-management.”

The study found that many stroke survivors and caregivers felt marginalised due to the misalignment between how healthcare access in primary care is organised and survivors’ and caregivers’ competencies. For example, individuals felt that in order to access services they needed an awareness of what services are available, plus the ability to communicate effectively with healthcare professionals. This situation can be compounded by cognitive, speech and language problems that can further affect a patient’s ability to negotiate healthcare access.

“Stroke survivors and their caregivers can feel abandoned because they struggle to access the appropriate health services, leading to marginalisation,” says Dr Lisa Lim, one of the study authors. “This arises because of a number of factors, including lack of continuity of care, limited and delayed access to community services, and inadequate information about stroke, recovery and healthcare services.

“We need mechanisms to encourage better communication and collaboration between generalist services, which tend to provide the longer term care after stroke, and specialist services, which provide the care in the immediate phase post-stroke.”

The researchers argue that providing support from healthcare professionals within the first year after stroke would increase patients’ ability to self-manage their chronic condition. This can be achieved by providing timely and targeted information about stroke and available resources, and through regular follow-ups that foster supportive long-term relationships with healthcare professionals.

“Giving the right information at the right time will help stroke survivors and their caregivers become more self-reliant over time and better able to self-manage living with stroke,” adds Dr Lim.

The team identified two key areas of improvement to address patients’ and caregivers’ marginalisation: increasing stroke-specific health literacy by targeted and timely information provision, and improving continuity of care and providing better access to community healthcare services.

Pindus, DM et al. Stroke survivors’ and informal caregivers’ experiences of primary care and community healthcare services - a systematic review and meta-ethnography. PLOS ONE; 21 Feb 2018; DOI: 10.1371/journal.pone.0192533

A systematic review of studies focused on stroke survivors’ and carers’ experiences of primary care and community healthcare services has found that they feel abandoned because they have become marginalised by services and do not have the knowledge or skills to re-engage.

“Stroke survivors and their caregivers can feel abandoned because they struggle to access the appropriate health services, leading to marginalisation.” – Lisa Lim

Image: Blood pressure measurement, close-up (credit: Kate Whitley, Wellcome Images)

Researcher profile: Dr Lisa Lim

As well as being a researcher in the Department of Public Health and Primary Care, Dr Lisa Lim is also a GP. Her experience with patients helps inform her work.

“My research is with stroke survivors, looking at how we can improve things for them after stroke as well as preventing further strokes,” she says. “We know that stroke survivors and their carers often struggle after they have been discharged from specialist services and their needs are not always identified or addressed by healthcare services; this is what we want to change. This is a problem I see in my clinical practice and I know how important it is to these patients.”

Working in collaboration with researchers at the University of Leicester, Dr Lim and the team at Improving Primary Care after Stroke (IPCAS) have spent the past two years developing and piloting a primary care intervention for stroke survivors. The intervention is now ready to be trialled and they are currently recruiting GP practices and patients.

Dr Lim says she hopes her work will demonstrate how important it is that we continue to invest in primary care research, and how primary care can help people to live well with a chronic problem like stroke. “It can make a massive difference to people’s lives,” she says.

“It may not be considered by some to be the most glamorous research,” she adds. “We will not be ‘curing’ stroke, but what we are trying to do is make a big impact on the day-to-day lives of people affected by stroke.”

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

License type: Attribution
Categories: Cambridge, Cambridgeshire

International experts sound the alarm on the malicious use of AI in unique report

Cambridge University NewsFeed - Wed, 02/21/2018 - 07:57

Read more about the findings and download the report here.

Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report – sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists.

“For many decades hype outstripped fact in terms of AI and machine learning. No longer.” – Seán Ó hÉigeartaigh

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Categories: Cambridge, Cambridgeshire

Many highly-engaged employees suffer from burnout

Cambridge University NewsFeed - Wed, 02/21/2018 - 00:00

Whereas lack of engagement is commonly seen as leading to employee turnover due to boredom and disaffection, the study finds that companies, in fact, risk losing some of their most motivated and hard-working employees due to high stress and burnout – a symptom of the “darker side” of workplace engagement.

It is concerning, concludes the study by academics working in the UK, US and Germany, that many engaged employees suffer from stress and burnout symptoms, which may be the beginning of a pathway leading into disengagement.

“Nearly half of all employees were moderately to highly engaged in their work but also exhausted and ready to leave their organisations,” said co-author Dr Jochen Menges from the University of Cambridge. “This should give managers a lot to think about.”

The study, published in the journal Career Development International, examined multiple workplace factors that divide employees into various engagement-burnout profiles: low engagement-low burnout (“apathetic”), low engagement-high burnout (“burned-out”), high engagement-low burnout (“engaged”), “moderately engaged-exhausted” and “highly engaged-exhausted”.

While the largest group, at 41 percent, fit the healthy “engaged” profile, 19 percent experienced high levels of both engagement and burnout (“highly engaged-exhausted”) and another 35.5 percent were “moderately engaged-exhausted”.

The highest turnover intentions were reported by the “highly engaged-exhausted” group – even higher than those of the unengaged group that might commonly be expected to be eyeing an exit.

“These findings are a big challenge to organisations and their management,” said Menges, who is a Lecturer in Organisational Behaviour at Cambridge Judge Business School. “By shedding light on some of the factors in both engagement and burnout, the study can help organisations identify workers who are motivated but also at risk of burning out and leaving.”

While previous studies had looked at engagement-burnout profiles, the new study – conducted at the Yale Center for Emotional Intelligence, in collaboration with the Faas Foundation – also focuses on demands placed on employees and resources provided to them in the workplace, and how these affect engagement and burnout.

The study is based on an online survey of 1,085 employees in all 50 US states. It measured engagement, burnout, demands and resources on a six-point scale ranging from such responses as “never” to “almost always” or “strongly agree” to “strongly disagree”.

For engagement, questions included “I strive as hard as I can to complete my job” and “I feel energetic at my job”. For burnout, participants were asked how often at work they feel “disappointed with people” or “physically weak/sickly”. Demand questions included “I have too much work to do”, while resources were measured by questions such as “my supervisor provides me with the support I need to do my job well”.

The researchers then examined overlap of these various factors, and how they interact and influence each other, in order to draw conclusions about the different profile groups.

“High engagement levels in the workplace can be a double-edged sword for some employees,” said Menges. “Engagement is very beneficial to workers and organisations when burnout symptoms are low, but engagement coupled with high burnout symptoms can lead to undesired outcomes including increased intentions to leave an organisation. So managers need to look carefully at high levels of engagement and help those employees who may be headed for burnout, or they risk higher turnover levels and other undesirable outcomes.”

Julia Moeller et al. ‘Highly engaged but burned out: intra-individual profiles in the US workforce.’ Career Development International (2018). DOI: 10.1108/CDI-12-2016-0215

Underlining the danger of job burnout, a new study of more than 1,000 US workers finds that many employees who are highly engaged in their work are also exhausted and ready to leave their organisations.

“These findings are a big challenge to organisations and their management.” – Jochen Menges

Image: Keyboard warrior (credit: Glenn Carstens-Peters)

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

License type: Public Domain
Categories: Cambridge, Cambridgeshire