DFA Technology FAQs
This information offers a fuller explanation of the DFA technology. Questions are grouped logically, but answers will necessarily overlap to some degree; some repetition of points is unavoidable in providing a sound reply to a given question.
Whilst long-standing, the technology continues to evolve, and new questions will continue to be asked. This FAQ sheet will therefore be kept current on the website below.
DFA is a standalone monitoring device installed in the circuit substations of medium and high voltage distribution overhead lines and cables. It is designed to provide direct, real-time line condition information in the event of a developing issue with the circuit components, including the nature of that issue, to aid field maintenance, refurbishment and replacement decision-making, safety, and bushfire risk assessment. Warnings are passed to the asset owner accordingly, whilst the captured information may be reviewed in greater depth by the system operator to assist with managing the event as efficiently as possible. A characteristic of this technology is that it can detect developing issues before they become known to the asset owner by conventional means.
The DFA Technology System consists of two main components, DFA Devices and a DFA Master Station, both necessary to the basic use of the DFA Technology System. The following high-level schematic illustrates the relationship between these components. The customer is responsible for all items shown in the diagram other than the DFA Master Station and DFA Devices. Each DFA Device carries out a full assessment of detected disturbances and provides buffered storage of the incoming data. The DFA Devices are connected via the internet to the Master Station. The Master Station is currently a cloud-based service which provides the secure conduit and main data repository between the DFA Devices and the customer.
High-Level Schematic of the DFA Technology System
Origin and Pedigree of the Technology
DFA has been in use for about 20 years and has accumulated many thousands of circuit-years of both normal and abnormal operational experience over this period.
DFA technology was commissioned by the USA power industry specifically to address fire risk reduction and the need for improved MV line management. The technology was never simply a research project with no end use identified at the outset. DFA was developed by the electrical engineering department of Texas A&M University (‘TAMU’) in Texas, USA. Research and field demonstration have been conducted over the years by TAMU and EPRI (the Electric Power Research Institute, USA), in close cooperation with multiple utility companies. Some seven patents apply to the technology.
DFA has been field demonstrated for over a thousand circuit-years. There are currently several hundred units deployed in the USA market, with 66 specifically dedicated to the Texas Wildfire Mitigation Project. In addition, Lord Consulting has units deployed across a number of utilities within Australia and New Zealand. Further deployment projects are being planned for Australia, New Zealand, and the UK. Approximately 20 distribution utilities in the USA have DFA systems in active operational duty.
This is a very important question. The DFA’s fundamental system architecture was conceived so that the field device could always be updated as refinements came to hand, based on new learnings and field experience. DFA technology embodies over 20 years of field-proven performance, based on field-sourced events correlated with actual issues. No part of the development relied on simple ‘theoretical’ assumptions about how real-world circuit events might look electrically. The DFA was thus developed with the fullest customer cooperation, and as the technology is now deployed internationally in exponentially growing numbers, this model will be enhanced and refined for its lifetime through regular algorithm and data-assessment updates drawn from the growing accumulation of field-verified real-world events.
In other words, DFA technology is mature but, by design, will continue to evolve in sophistication via its unique concept of planned and systematic integration of international deployment experiences. This concept is generating considerable excitement internationally, as it is an unprecedented embodiment of industry cooperation on this key asset management requirement.
No other manufacturer offers a technology which delivers the same breadth of capability and outcomes as DFA, and it remains a unique concept in the market.
Yes. A full suite of detailed papers and case studies on the DFA technology in operation is available at www.lordconsulting.com/DFA-HIZ
DFA Technology System Capability
No technology is a panacea for all line issues, but DFA technology provides a paradigm shift in performance management for MV and HV systems. It can detect a variety of circuit failures, cable pre-failures, and other events. It has documented electrical signatures related to all the events outlined in TABLE 1. It features automated, mature algorithms for characterising some types of failures; other algorithms continue to be developed and deployed. A positive improvement in cumulative line reliability statistics (‘SAIDI’ and ‘SAIFI’) and a reduction in bushfire risk are likely from the technology.
Yes; examples include recording the operation, and associated high-resolution waveforms, of reclosers that are not SCADA-connected, motor starts, normal operation of switches and capacitor banks, and power quality data (V, I, real/reactive power, power factor). The DFA essentially reports what is happening on the circuit, whether this is a developing fault which has not yet tripped a recloser, circuit breaker, or other protection system, or a full-blown fault which has tripped the circuit. The DFA is most effective when it is an integral part of both the engineering and operational fault management processes.
Yes, there are many levels of contribution offered by the DFA:
- Firstly, within engineering, for developing issues which are identifiable by the DFA, e.g. clamps, insulators, vegetation, animals, etc. The information provided (if gathered in a timely manner) may be used to proactively locate and repair the defect before it causes a system outage.
- Secondly, for system issues which are not yet fully identifiable but require monitoring in case they escalate (i.e. repetitive incipient events), so that it is possible to react quickly should an event escalate, or to identify that it has been cleared by proactive maintenance work.
- Thirdly, in operations, once an event has become a full-blown fault. The location, magnitude, and characteristic information gained from the DFA during an event is used by the operational engineers and fault teams to drive the decision-making process for the management of that fault.
- Fourthly, as an asset management tool alongside visual inspection data, to identify the optimal maintenance expenditure approach for the circuit and to confirm that system performance has been successfully improved as a result of that work being carried out.
Refer to the capability profile.
Refer to attached ‘Addressing Industry Concerns’ information sheet.
The DFA technology is unique and unlike any other technology in current use, so it should not be viewed as similar to any other technology. DFA connects to conventional CTs and VTs, as do traditional monitoring devices, but it senses these inputs with far greater sensitivity and then applies proprietary analysis to the sampled data to detect subtle events that may indicate developing problems, events not even recorded by conventional technologies. For this reason, data recorded by other technologies is not adequate for specialised DFA analysis. Because DFA records even subtle anomalies, it necessarily records much more data than conventional technologies. This large data volume can be managed because DFA relies on automated algorithms, rather than human effort, to report system events and health. DFA also offers selective, customer-specified reporting of noted events, whether the DFA determines them to be ‘normal’ or ‘abnormal’.
It should be noted that the DFA system is also capable of providing numerous conventional functions, such as logging of conventional power system quantities. Whilst certain functions may be covered by myriad discrete technologies, the DFA system advantageously provides these multiple functions in a single, unified platform and database. The integration of so many unique processing and data-assembly capabilities in the DFA system makes its contribution unrivalled.
No. It is not conceived as SCADA per se and was purposefully intended to run outside SCADA, as an adjunct to it. It operates as an independent, web-accessed, intelligent MV line monitoring system, with alarms activated by assessed line condition abnormalities and passed to the network and operations team via email, SMS, or web. DFA reports operations of station breakers, as does SCADA, but DFA also reports operations of remote reclosers and capacitors, including those without communications, by using sophisticated digital techniques to detect these operations from the waveforms it measures from conventional CTs and VTs. The customer may of course choose to import DFA-acquired data into SCADA for statistical records.
The DFA is not conceived as a standalone power quality analyser but embraces that function as a subset of its fuller capabilities. Power quality analysers primarily focus on voltage irregularities, often have set points for triggering recordings or alarms, and do not interpret the issues, simply recording data for later analysis by those trained in the art. The DFA technology, by contrast, focuses primarily on intelligent real-time assessment of incoming circuit data, of which current signatures predominate, as that is where the majority of circuit abnormalities are noted, although it certainly also assesses voltage in its determinations. DFA technology diagnoses line events via proprietary digital signal processing and reports actual interpreted findings only. The technology allows ‘high fidelity’ recordings showing minor signatures, which is not possible with a power quality device. DFA devices will record virtually all disturbances recorded by PQ devices, but will also record issues that initially cause too little variation to be detected by PQ devices. Thus, whilst appearing to be related devices, the principle of analysing and characterising waveforms is quite different.
No. DFA was conceived to work alongside protection systems but has a totally different focus and intent, and it does not offer trip functionality. Whilst sophisticated in their operation, modern protection relays still function primarily to clear faults once present. Conversely, DFA reports assessed and qualified data, sometimes with probability functions attached.
The technology has not yet been tested on SWER circuits, as they are not used in the USA. The designers believe the basic concepts are universal, but peculiarities of SWER circuits will likely require adaptation of the detection algorithms. It is important to note that the DFA system’s design provides for the ability to update algorithms seamlessly, via the Internet, as needed.
The DFA, unlike other proprietary fault management systems such as GFN (which actively stress the network past the normal point of failure to prove asset fitness), does not initiate faults or defects. It only reports what is actually determined to be happening on the installed network, and particularly what might be viewed as matters of actual or pending concern. The DFA merely monitors the system for the earliest signs of a developing electrical defect or fault and reports back to the utility in plain English. Such a defect or fault is typically on a natural glideslope to failure and, if left unactioned (as on a circuit without DFA technology installed), will result in the eventual failure of the asset and system, possibly the loss of customer supplies, wildfire ignition, or another hazardous situation. The DFA in no way affects the speed at which this defect or fault deteriorates, nor does it prevent the defect from ultimately failing. The DFA assists the utility by either providing an early warning that there is an escalating defect, or providing valuable information to locate a system fault more quickly following failure. The DFA also identifies when defects or events are of a repetitive nature, allowing the utility to gauge whether the defect is escalating or stable.
DFA, like passive and injection Resonant Earthing systems, detects the initial onset of fault currents. The principle of Resonant Earthing Systems (RES), however, is that of a reactive fault management system, operating after the fault has reached a level at which ongoing normal circuit operation is not possible. In many cases, conventional protection systems must be de-sensitised to allow RES systems to trigger and operate successfully before circuit trip/lockout. RES are also only effective on phase-to-earth faults, with most phase-to-phase faults resulting in an immediate circuit trip/lockout.
DFA, however, operates at a much more sensitive level, detecting both phase-to-phase and earth faults significantly earlier than any of these other technologies. Although no single technology can be the panacea for all electrical failure events, it is the predictive ability of DFA which aligns so uniquely well with the reactive management capabilities of RES. In particular, for wildfire-prone areas, a mix of both DFA and RES provides the optimal capability for the earliest detection of developing phase-to-phase and phase-to-earth electrical breakdown, whilst the RES provides security in managing those failures which develop at a rate too rapid to manage within the pre-failure window.
At a high level, to be compatible with DFA processing and algorithms, the external equipment would need to write files indefinitely at 24 bits of resolution, record 200 A of fault current at the device, and be able to trigger on changes in magnitude of 0.5 A (primary) earth fault current.
We understand and appreciate the attractiveness of using data from existing devices rather than adding another box to each circuit. On the surface it sounds like a great question: “We have all of these fantastic new relays and PQ meters on the system, just not the intelligence of the DFA. So why can’t we just take the DFA intelligence and use it with the equipment that we’ve got?” In answering the question, one needs to really understand what the problem is, and it comes down to the fundamental purpose of what each instrument is designed to do. Basically, although they use the same VT and CT inputs, PQ meters and relays are designed to operate in a fundamentally different fashion from DFA. The waveforms they produce are not compatible with DFA parameters, processes, or algorithms.
Specifically, PQ meters are typically designed and programmed to trigger only on large changes in voltage (e.g., a sag to 95%). Protection relays typically only record events which produce a relatively high magnitude of current (e.g., events which initiate their protection cycle, even if the device ultimately does not operate). Almost without exception, and by design, this produces a relatively small number of events as measured at the substation. These events are typically limited to a few core causes, most notably conventional faults and perhaps capacitor switching events. Invariably, these devices do not pick up subtle waveforms that may indicate a device is entering an incipient failure state, for example series and shunt arcing events. DFA, however, is designed to achieve the above specifications and then to triage the large number of recordings to identify those events which are abnormal, repetitive, or important. All of this is achieved in a matter of seconds.
This distinction is critically important, because it means that in most cases, PQ and Relay devices used as an input for waveform analysis either do not trigger at the thresholds required for detecting incipient or series arcing events, or do not record the specific information that the DFA uses to analyse the waveforms to predict the failures. The important point to stress here, because it is essential for successful waveform analytics, is that if your existing devices aren't capturing ALL events of interest (generally because they are not designed to, and are not able to be programmed to do so), then it doesn't matter what framework you use, DFA or any other platform. If you don't have the recordings, you cannot, by definition, run any waveform analysis on them.
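The difference in trigger philosophy can be sketched numerically. The thresholds below come from the figures quoted above (a PQ meter sag trigger at 95% of nominal voltage; a sensitive trigger on a 0.5 A primary earth-fault current change); the 11 kV nominal voltage and the event values themselves are illustrative assumptions, not DFA internals:

```python
# Contrast of trigger philosophies, using the thresholds quoted in the
# text: a PQ meter recording only on a voltage sag below 95% of nominal,
# versus a sensitive trigger on a 0.5 A (primary) change in earth-fault
# current. The 11 kV nominal voltage and event values are illustrative.

NOMINAL_V = 11_000.0  # assumed 11 kV system, for illustration only

def pq_meter_triggers(voltage_v: float) -> bool:
    """Typical PQ meter: record only on a sag below 95% of nominal."""
    return voltage_v < 0.95 * NOMINAL_V

def sensitive_triggers(delta_current_a: float) -> bool:
    """Sensitive recorder: trigger on a current change of 0.5 A or more."""
    return delta_current_a >= 0.5

# A subtle incipient arcing event: ~1.5 A modulation, negligible sag.
print(pq_meter_triggers(10_950.0))  # False: the PQ meter never records it
print(sensitive_triggers(1.5))      # True: the sensitive device captures it
```

A subtle event that never depresses the voltage below the PQ meter's set point is simply invisible to it, which is the point made above: if the recording never exists, no downstream analytics can recover it.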
There is also a misconception that all devices record a given waveform in more-or-less the same way. We are often asked: if it were indeed possible to reprogram a relay’s or PQ meter’s sensitivity to capture the specific waveforms required for DFA analysis, could it then be used by DFA for this purpose? Unfortunately, the answer is generally “no”, and Power Solutions personnel have authored an academic research paper on this topic, which is available through the link below.
Fundamentally, each waveform recording device has its own “fingerprint” associated with its recordings, based on its analogue and digital hardware, and its software programming. We often are asked for parameters such as sample rates and bits of resolution as you have asked in this case. Those are both part of a device’s “fingerprint” but many additional factors come into play e.g. analogue or digital filtering applied in the device, the maximum record length of the device, the amount of noise introduced by the electronics in the device, etc.
The critical point is that the design of algorithms and waveform analytics, particularly of the kind required for advanced event analysis and classification, requires an understanding of the specific analogue and digital processing parameters of the device which captured the original waveform being classified. Said differently, if the same event were recorded by two separate devices (e.g., an SEL-351 and a Dranetz 61000), and the same analysis algorithms were run on the outputs from those two devices, there is no guarantee that the outputs would match. This means that using signals from a variety of devices will probably work sufficiently well for analysing and locating large conventional faults, but analysing and characterizing more subtle events, series arcing, repeat events and those events which the DFA can identify to a particular source, requires knowing and accounting for the peculiarities of individual waveform recording devices. This is important to understand, because it fundamentally affects the accuracy and operation of the DFA outputs. This is why we do not allow for the import of other device waveform recordings into the DFA for analysis.
One final point. Even if all of the above could be overcome to allow the DFA algorithms to operate to a satisfactory level of accuracy from multiple device sources, the DFA has been physically designed to take established analogue inputs and then to analyse, record, and transmit relevant information to the Master Station. The design of the DFA does not facilitate the separation of the data capture, recording, and analysis functions. Essentially, in its current form the DFA cannot import waveforms from other devices in isolation from its existing hardware. Such a change would be a major design revision and not something which could be undertaken easily, cheaply, or quickly.
How it Works
DFA devices, applied on a per-circuit basis at the substation, continuously digitise waveforms from conventional CTs and VTs. Waveforms are recorded when anomalies are detected, even small anomalies that are ignored by conventional technologies. Advanced digital processing and patented analytical techniques, developed from a library of over 1 million referenced issues recorded to date, are applied to each recorded waveform, with the intent of determining the power system event that caused the waveform anomaly. The techniques employed are biased so as to minimise false alarms. The substation DFA device then reports diagnosed events to a central master station, which makes the information available to personnel, thus increasing their situational intelligence regarding the condition of the power system. The system architecture is inherently adaptable and upgradable as refinements are implemented.
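The flow just described (digitise continuously, record a window when an anomaly is detected, classify it, report it) can be sketched as a simple pipeline. Every name, threshold, and the trivial classifier below are placeholders for illustration; they are not the DFA's proprietary internals:

```python
# Illustrative sketch of the pipeline described above: continuously
# digitise, record a waveform window when an anomaly is detected,
# classify it, and report. All names, thresholds, and the classifier
# are placeholders, not DFA internals.
from statistics import pstdev
from typing import Optional

def is_anomalous(window: list, threshold: float = 0.5) -> bool:
    """Flag a sample window whose deviation exceeds a small threshold."""
    return pstdev(window) > threshold

def classify(window: list) -> str:
    """Stands in for the proprietary classification algorithms."""
    return "unclassified disturbance"

def process_window(window: list) -> Optional[str]:
    """Classify anomalous windows only; quiescent windows are discarded."""
    if is_anomalous(window):
        return classify(window)  # in the real system, reported onward
    return None

print(process_window([0.0] * 16))      # None: quiescent window discarded
print(process_window([0.0, 3.0] * 8))  # classified and reported
```

The key design point mirrored here is that recording is triggered by small deviations rather than fixed high thresholds, with automated triage downstream keeping the data volume manageable.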
Sampling rate is often asked about, sometimes in conjunction with the order of harmonics that can be resolved. The DFA sample rate is 256 points per cycle, which equates to 12,800 samples per second per channel on 50Hz systems, and is sufficient to capture the waveform data and characteristics of interest, although sample rate is not considered the most important aspect of the DFA’s data.
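The arithmetic behind these figures can be checked directly. The harmonic calculation below assumes the usual Nyquist limit of half the sample rate; the 60 Hz value is shown only for comparison:

```python
# Sample-rate arithmetic for the stated 256 points per cycle.
# The 50 Hz figure matches the text; 60 Hz is shown for comparison.
POINTS_PER_CYCLE = 256

def samples_per_second(system_hz: int) -> int:
    """Samples per second per channel at the given system frequency."""
    return POINTS_PER_CYCLE * system_hz

def max_resolvable_harmonic(system_hz: int) -> int:
    """Highest harmonic order at or below the Nyquist frequency."""
    nyquist_hz = samples_per_second(system_hz) / 2
    return int(nyquist_hz // system_hz)

print(samples_per_second(50))       # 12800 samples/s, as stated
print(samples_per_second(60))       # 15360 samples/s on 60 Hz systems
print(max_resolvable_harmonic(50))  # up to the 128th harmonic
```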
Also often asked is the number of bits of the DFA’s A/D converters. To answer this question properly, one must distinguish the A/D converter’s raw number of bits from its number of effective bits. An A/D converter can have a high raw bit count but, if its associated analogue circuitry is electrically noisy, a substantially smaller effective bit count, after the noise bits are considered and discarded.
Unfortunately, most waveform recorders’ spec sheets provide only raw A/D bit counts and do not consider or reveal effective bits. Effective resolution, also referred to as small-signal resolution, is necessary for detecting certain incipient failures, and DFA provides 18+ effective bits of resolution to make this possible. It should be noted that systems claiming to remove noise digitally, after conversion from analogue to digital form, may improve effective resolution for steady-state signals but cannot improve effective bits for transient signals, such as those important to the detection of certain incipient faults and other transient conditions.
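The relationship between raw bits, front-end noise, and effective bits can be illustrated with the standard ENOB (effective number of bits) formula, ENOB = (SINAD − 1.76) / 6.02. The SINAD figures below are hypothetical examples, not DFA measurements:

```python
# Effective number of bits (ENOB) from a measured signal-to-noise-and-
# distortion ratio (SINAD), via the standard formula
#   ENOB = (SINAD_dB - 1.76) / 6.02
# The SINAD figures below are hypothetical, chosen only to illustrate
# how analogue front-end noise erodes a high raw bit count.

def enob(sinad_db: float) -> float:
    """Effective bits delivered by a converter with the given SINAD."""
    return (sinad_db - 1.76) / 6.02

# A nominally "24-bit" converter whose analogue front end delivers
# 110 dB SINAD yields roughly 18 effective bits:
print(round(enob(110.0), 1))  # 18.0

# The same raw bit count behind a noisier front end (90 dB SINAD):
print(round(enob(90.0), 1))   # 14.7
```

This is why two devices with identical raw bit counts on their spec sheets can differ by several effective bits in practice.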
Results have been successfully recorded on line lengths of at least 200 km from the connection point to the line.
Each circuit will produce a certain level of normal and abnormal ‘disturbance data’, which varies immensely from circuit to circuit. Some circuits can produce a dozen disturbances a month, whilst others can produce several hundred. This has no direct bearing on circuit performance, as some disturbances are perfectly normal operational occurrences (switching, capacitors, motor starts, etc.). In times of storms, these numbers can increase to several thousand over a matter of days, especially if vegetation is blown intermittently into circuits. The DFA can typically store an average of 4-6 weeks of normal data, which over a storm event can drop to a matter of days. Whilst the DFA is connected, data is transmitted and then overwritten. In the event of a communications failure, data is likewise overwritten, oldest first. No situation has yet been encountered in which communications were down long enough to cause a loss of recorded event data.
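The overwrite-oldest-first storage described above is essentially a ring (circular) buffer, which Python's deque with a maximum length models directly. The five-record capacity below is purely illustrative, standing in for the device's weeks of buffered data:

```python
# The overwrite-oldest-first storage described above is a ring
# (circular) buffer. Python's deque with maxlen models it directly.
# A capacity of 5 records is illustrative; the device itself buffers
# roughly 4-6 weeks of data.
from collections import deque

buffer = deque(maxlen=5)

for event_id in range(8):  # capture more events than the buffer holds
    buffer.append(f"disturbance-{event_id}")

# The three oldest records were silently overwritten; newest retained:
print(list(buffer))
```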
The DFA technology will typically report findings quickly in an easily read code of characterised issues, along with associated timings, allowing a clear and immediate understanding of the issue(s). Through the decoding process and the use of the network’s own line-modelling software, one can determine the likely site of many reported events. Associated waveform data is also saved with abnormal events and can be retrieved if one wishes to view it, although it is important to note that this data is generally NOT required for a normal assessment of, or response to, a reported event.
The DFA does not provide a specific distance to fault but, for some events, does provide system event parameters such as recloser timings, magnitude, phase/ground identification, kVA and load information, as well as probable equipment details, which can be used to determine the most likely location of an event on the circuit. It is also planned that an impedance-to-fault calculation will be included within the DFA’s capabilities in the near future.
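Impedance-to-fault estimates of the kind mentioned are conventionally derived from the measured fault-loop reactance and the line's per-kilometre reactance. The sketch below shows the textbook single-ended reactance method; both the method and every number in it are illustrative assumptions, not the DFA's planned algorithm:

```python
# Textbook single-ended reactance method for estimating distance to a
# fault. Illustrative only: this is NOT the DFA's algorithm (whose
# impedance-to-fault feature is described above as planned), and all
# the numbers below are hypothetical.
import cmath
import math

def distance_to_fault_km(v_fault: complex, i_fault: complex,
                         x_per_km: float) -> float:
    """Estimate distance from fault-loop V/I phasors at the substation.

    x_per_km: positive-sequence line reactance in ohms per km.
    """
    z_measured = v_fault / i_fault      # apparent fault-loop impedance
    return z_measured.imag / x_per_km   # use reactance to reject arc R

# Hypothetical measurement: 2 kV across the fault loop, 500 A fault
# current lagging by 70 degrees, on a line of 0.35 ohm/km reactance.
i_fault = 500 * cmath.exp(-1j * math.radians(70))
d = distance_to_fault_km(2000 + 0j, i_fault, 0.35)
print(round(d, 1))  # roughly 10.7 km to the fault
```

Using only the imaginary (reactive) part of the measured impedance is the classic way to reduce the error introduced by resistive fault arcs, though load current and infeed effects still limit single-ended accuracy.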
Within an electricity system, there is no such thing as a ‘false positive’ or ‘false negative’. There are only normal operational conditions and abnormal operational conditions, which develop in amplitude and intensity at differing rates over differing periods of time. The speed at which these events develop, from first indications to the point at which the circuit can no longer support normal operations (a circuit protection trip event or fault), is dependent on loading, weather, mechanical, and other local factors.
The DFA was designed from the outset not to report ‘false positives’, i.e. information suggesting there is an issue of concern when in fact no such issue is occurring on the circuit. It has achieved this outcome over its many years of field-proven deployment through the rigour of its algorithm and signal-processing technology, which was itself designed from accumulated examples of actual real-world events. In order to report an event, the DFA must see a signal above a minimum level of activity. This level varies depending on several factors, but design and field experience to date have determined that events below such levels have not been found to be significant. This raises the question of the DFA’s history of missing events or information that were significant. This is a circular argument, as one can never know what one did not know, but the reality is that no reports have yet been made of the DFA failing to see something that was patently occurring with demonstrable electrical signals and which had been determined by other means to have been the case. One final point: the sensitivity of the DFA is typically in the 1-2 amp range of adverse modulation, on circuits which can be carrying hundreds of amps of load current (which of course precludes any claims of the DFA being a partial discharge monitor or the like), but the warnings of issues, when seen, have been found to be suitably timely.
As part of the offered support for the DFA-Plus Devices, a consultant skilled in the power distribution industry asset management field will be selected from the LORD Consulting team and allocated to a given client by way of a formal assignment and introduction. Fundamentally, the consultant is intended to deliver an effective implementation of the technology, ensuring it stays relevant to the client’s business and that its implementation is expanded progressively as a direct result.
Lord Consulting Services include:
- Articulate, as practicable, the client’s current and historic circuit issues and patterns of circuit expenditure and reliability (such things as annual Quality of Supply spend, % Opex spent on faults, fires started, and associated statistics, including line and overall SAIDI and SAIFI).
- Form an agreed basis with the client as to what are the expectations for the DFA technology
- Agree a basis for later comparison of the DFA’s contribution against historical line performance
- Assess operationalising requirements including (in no set order of priority):
- change management
- operational processes
- training needs at all levels of the organization
- deployment profile at the outset
- technical matters relating to deployment (site by site, but including verification that CT ratios are suitable for permitting optimal low level signal detection by the DFA)
- planning and agreeing monitoring and associated operational response strategies for the installed devices
- defining and agreeing role and input methodology of PSI and LORD Consulting team in the process
- agreeing all perceived and likely requirements for the DFA and its wider capabilities, including any special applications or contributions (e.g.: line design verification, reliability of components, and suitability of components to special environmental matters like salt spray or dust).
- agreeing processes for maintaining operational and firmware currency of devices
- determination and agreement of manner by which data from DFA may be integrated to all relevant areas of the client’s business
- determination and agreement on first year (and later years) milestones & how they will be measured. This will certainly include things like line SAIDI and SAIFI on monitored circuits, comparison with earlier results, and a parallel quantified ‘balance sheet’ of ‘unlost SAIDI minutes’ and prevented SAIFI events.
- Work alongside client to implement the Year One plan based on the above assessment criteria, including: focus on the processes; contributions of the DFA to the operational, asset management, and commercial sides of the client’s business; training needs and responses; interpretation of DFA by client and via LORD/PSLLC interfaces; and the quality, degree, and contribution of the direct PSLLC USA interface with the client, including on any special research areas sought by the client.
- Continue to work with each client to hone the agreed plan for the technology rollout as it evolves in cultural, uptake, and contribution levels. This is felt to best be done annually in a formal meeting with the client at an interval agreed at the outset, most likely at the time of budgeting for the client’s next financial year, to:
- Verify that the commitment to DFA by the client remains strong in principle and practice.
- Confirm that the intended applications of, and expectations for, the technology have been met, and to what degree (refer 8,11 above)
- Review the documented contributions of the technology over the year (refer [e] above).
- Document general satisfaction levels and areas of attention required
- Review levels of training in DFA operation and response by client and assess any training or support needs in that respect
- Consider action points to address any matters arising from the above meeting
- Plan for the on-going utilization and implementation of the technology as it evolves in cultural, commercial, and uptake levels at each client. This would include the assessment of new applications or dynamics of the Industry (e.g.: directions set by AER, commercial fortunes of client et al) not previously considered or adopted but which might now be worth reconsideration, requirement and associated budget for more devices for coming year, and commercial business case required for the latter.
- Assist client with possible additional business case requirements to support a more extensive DFA implementation in the coming year
- Convey agreed findings of the above review [h] (with timelines or any actions) to client in writing and arrange client sign off of the outcomes and action points arising by the start of each calendar year.
- Review relevant content of annual client review with the LORD Consulting and product supply team leaders and PSLLC in a timely manner so as to ensure client requirements are delivered and scheduled as practicable in a continuous improvement manner.
- Liaise with the LORD and PSLLC product support team to address any more basic matters that might be handed to them, such as installation, commissioning, first-line support, and training. The interface will also assess improvements to present levels of service.
LORD Consulting and PSLLC, offer as part of the purchase price a uniquely-conceived package of support to ensure the technology continues to remain a viable and relevant asset management tool. This support includes general on-going monitoring of the unit remotely with comment passed back promptly on pertinent issues being encountered, liaison across the operational team to the engineering team to the asset management team as to outcomes and contribution of the technology, regular generic updates as to interesting results observed from various clients, and a review of the continued contribution being made by the technology.
Importantly, whilst the DFA itself is a continuous monitor of circuit events in near real time, Lord Consulting/PSLLC does not provide the above additional contributions via a 24hr/365 system monitoring and operational alert service.
Customers may wish to purchase additional ad-hoc support through the DFA Analysis Service on a case-by-case basis. The DFA Analysis Service is a fee-based service under which Lord Consulting/PSLLC assist the Customer in analysing and understanding specific DFA Data and related circuit events, and doing so more fully than is provided by the proprietary, automated DFA Technology software alone.
Typically not, but the DFA offers a range of capabilities and complexities of data presentation which the customer can choose to access and interact with. The data is simply interpreted by operational teams with minimal training. It is the express policy of LORD and PSLLC that customers be trained and assisted to a level of understanding and utilisation of the device as befits their expertise and interest. Optimal contribution from the DFA technology will be enjoyed by customers who make a good effort to understand for themselves how to assess and interpret the contributions from the DFA.
Reports of abnormal events are displayed on a dedicated web page, which can be viewed through a secure login by a System Controller or Engineering staff. User-configurable emailed reports may also be chosen by the customer.
No. Indeed, not having to do such a thing was one of the key design parameters from the outset. The line to which it is connected can be configured or reconfigured at will with no impact on the outcomes. One can also elect to move the DFA to different circuits at will, and it will work as soon as it has been given the circuit name and CT/VT details, a very simple and quick process if the use of relay test blocks for connection to the CT and VT is permitted.
The DFA operates as a single-ended device, with very good results and acceptable accuracy of site identification. That said, customers will typically collate DFA inputs with other available data from their system, such as SCADA records, customer feedback, and AMR data.